00:00:00.000 Started by upstream project "autotest-per-patch" build number 127154 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 24302 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.110 The recommended git tool is: git 00:00:00.110 using credential 00000000-0000-0000-0000-000000000002 00:00:00.112 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.188 Fetching changes from the remote Git repository 00:00:00.190 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.269 Using shallow fetch with depth 1 00:00:00.269 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.269 > git --version # timeout=10 00:00:00.330 > git --version # 'git version 2.39.2' 00:00:00.330 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.367 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.367 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/7 # timeout=5 00:00:06.333 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.345 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.358 Checking out Revision 178f233a2a13202f6c9967830fd93e30560100d5 (FETCH_HEAD) 00:00:06.358 > git config core.sparsecheckout # timeout=10 00:00:06.373 > git read-tree -mu HEAD # timeout=10 00:00:06.390 > git checkout -f 178f233a2a13202f6c9967830fd93e30560100d5 # timeout=5 00:00:06.413 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:06.413 > git rev-list --no-walk b96e2fd4fd67f35d13e68ed8cd11d67d819ff3fc # timeout=10 00:00:06.510 [Pipeline] Start of Pipeline 00:00:06.523 [Pipeline] library 00:00:06.525 Loading library shm_lib@master 00:00:06.525 Library shm_lib@master is cached. Copying from home. 00:00:06.539 [Pipeline] node 00:00:21.541 Still waiting to schedule task 00:00:21.541 Waiting for next available executor on ‘vagrant-vm-host’ 00:08:14.309 Running on VM-host-WFP7 in /var/jenkins/workspace/nvme-vg-autotest 00:08:14.311 [Pipeline] { 00:08:14.323 [Pipeline] catchError 00:08:14.325 [Pipeline] { 00:08:14.349 [Pipeline] wrap 00:08:14.357 [Pipeline] { 00:08:14.364 [Pipeline] stage 00:08:14.366 [Pipeline] { (Prologue) 00:08:14.382 [Pipeline] echo 00:08:14.383 Node: VM-host-WFP7 00:08:14.388 [Pipeline] cleanWs 00:08:14.396 [WS-CLEANUP] Deleting project workspace... 00:08:14.396 [WS-CLEANUP] Deferred wipeout is used... 
00:08:14.403 [WS-CLEANUP] done 00:08:14.856 [Pipeline] setCustomBuildProperty 00:08:14.989 [Pipeline] httpRequest 00:08:15.014 [Pipeline] echo 00:08:15.015 Sorcerer 10.211.164.101 is alive 00:08:15.024 [Pipeline] httpRequest 00:08:15.029 HttpMethod: GET 00:08:15.030 URL: http://10.211.164.101/packages/jbp_178f233a2a13202f6c9967830fd93e30560100d5.tar.gz 00:08:15.030 Sending request to url: http://10.211.164.101/packages/jbp_178f233a2a13202f6c9967830fd93e30560100d5.tar.gz 00:08:15.031 Response Code: HTTP/1.1 200 OK 00:08:15.032 Success: Status code 200 is in the accepted range: 200,404 00:08:15.032 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_178f233a2a13202f6c9967830fd93e30560100d5.tar.gz 00:08:15.177 [Pipeline] sh 00:08:15.462 + tar --no-same-owner -xf jbp_178f233a2a13202f6c9967830fd93e30560100d5.tar.gz 00:08:15.478 [Pipeline] httpRequest 00:08:15.497 [Pipeline] echo 00:08:15.499 Sorcerer 10.211.164.101 is alive 00:08:15.509 [Pipeline] httpRequest 00:08:15.521 HttpMethod: GET 00:08:15.521 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:08:15.521 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:08:15.522 Response Code: HTTP/1.1 200 OK 00:08:15.522 Success: Status code 200 is in the accepted range: 200,404 00:08:15.523 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:08:17.823 [Pipeline] sh 00:08:18.107 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:08:20.657 [Pipeline] sh 00:08:20.937 + git -C spdk log --oneline -n5 00:08:20.937 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:08:20.937 fc2398dfa raid: clear base bdev configure_cb after executing 00:08:20.937 5558f3f50 raid: complete bdev_raid_create after sb is written 00:08:20.937 d005e023b raid: fix empty slot not updated in sb after resize 00:08:20.937 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:08:20.958 [Pipeline] writeFile 00:08:20.976 [Pipeline] sh 00:08:21.310 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:08:21.322 [Pipeline] sh 00:08:21.607 + cat autorun-spdk.conf 00:08:21.607 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:21.607 SPDK_TEST_NVME=1 00:08:21.607 SPDK_TEST_FTL=1 00:08:21.607 SPDK_TEST_ISAL=1 00:08:21.607 SPDK_RUN_ASAN=1 00:08:21.607 SPDK_RUN_UBSAN=1 00:08:21.607 SPDK_TEST_XNVME=1 00:08:21.607 SPDK_TEST_NVME_FDP=1 00:08:21.607 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:21.615 RUN_NIGHTLY=0 00:08:21.617 [Pipeline] } 00:08:21.634 [Pipeline] // stage 00:08:21.651 [Pipeline] stage 00:08:21.653 [Pipeline] { (Run VM) 00:08:21.664 [Pipeline] sh 00:08:21.942 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:08:21.942 + echo 'Start stage prepare_nvme.sh' 00:08:21.942 Start stage prepare_nvme.sh 00:08:21.942 + [[ -n 0 ]] 00:08:21.942 + disk_prefix=ex0 00:08:21.942 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:08:21.942 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:08:21.942 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:08:21.942 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:21.942 ++ SPDK_TEST_NVME=1 00:08:21.942 ++ SPDK_TEST_FTL=1 00:08:21.942 ++ SPDK_TEST_ISAL=1 00:08:21.942 ++ SPDK_RUN_ASAN=1 00:08:21.942 ++ SPDK_RUN_UBSAN=1 00:08:21.942 ++ SPDK_TEST_XNVME=1 00:08:21.942 ++ SPDK_TEST_NVME_FDP=1 00:08:21.942 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:21.942 ++ RUN_NIGHTLY=0 00:08:21.942 + cd /var/jenkins/workspace/nvme-vg-autotest 00:08:21.942 + nvme_files=() 00:08:21.942 + declare -A nvme_files 00:08:21.942 + backend_dir=/var/lib/libvirt/images/backends 00:08:21.942 + nvme_files['nvme.img']=5G 00:08:21.942 + nvme_files['nvme-cmb.img']=5G 00:08:21.942 + nvme_files['nvme-multi0.img']=4G 00:08:21.942 + nvme_files['nvme-multi1.img']=4G 00:08:21.943 + nvme_files['nvme-multi2.img']=4G 00:08:21.943 + nvme_files['nvme-openstack.img']=8G 00:08:21.943 + nvme_files['nvme-zns.img']=5G 00:08:21.943 + (( SPDK_TEST_NVME_PMR == 1 )) 00:08:21.943 + (( SPDK_TEST_FTL == 1 )) 00:08:21.943 + nvme_files["nvme-ftl.img"]=6G 00:08:21.943 + (( SPDK_TEST_NVME_FDP == 1 )) 00:08:21.943 + nvme_files["nvme-fdp.img"]=1G 00:08:21.943 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:08:21.943 + for nvme in "${!nvme_files[@]}" 00:08:21.943 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:08:21.943 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:08:21.943 + for nvme in "${!nvme_files[@]}" 00:08:21.943 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G 00:08:21.943 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:08:21.943 + for nvme in "${!nvme_files[@]}" 00:08:21.943 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:08:21.943 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:08:21.943 + for nvme in "${!nvme_files[@]}" 00:08:21.943 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:08:22.202 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:08:22.202 + for nvme in "${!nvme_files[@]}" 00:08:22.202 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:08:22.202 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:08:22.202 + for nvme in "${!nvme_files[@]}" 00:08:22.202 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:08:22.202 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:08:22.202 + for nvme in "${!nvme_files[@]}" 00:08:22.202 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:08:22.202 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:08:22.202 + for nvme in "${!nvme_files[@]}" 00:08:22.202 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G 00:08:22.202 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:08:22.202 + for nvme in "${!nvme_files[@]}" 00:08:22.202 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:08:22.462 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:08:22.462 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:08:22.462 + echo 'End stage prepare_nvme.sh' 00:08:22.462 End stage prepare_nvme.sh 00:08:22.475 [Pipeline] sh 00:08:22.760 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:08:22.760 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:08:22.760 00:08:22.760 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:08:22.760 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:08:22.760 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:08:22.760 HELP=0 00:08:22.760 DRY_RUN=0 00:08:22.760 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img, 00:08:22.760 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:08:22.760 NVME_AUTO_CREATE=0 00:08:22.760 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,, 00:08:22.760 NVME_CMB=,,,, 00:08:22.760 NVME_PMR=,,,, 00:08:22.760 NVME_ZNS=,,,, 00:08:22.760 NVME_MS=true,,,, 00:08:22.760 NVME_FDP=,,,on, 00:08:22.760 SPDK_VAGRANT_DISTRO=fedora38 00:08:22.760 SPDK_VAGRANT_VMCPU=10 00:08:22.760 SPDK_VAGRANT_VMRAM=12288 00:08:22.760 SPDK_VAGRANT_PROVIDER=libvirt 00:08:22.760 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:08:22.760 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:08:22.760 SPDK_OPENSTACK_NETWORK=0 00:08:22.760 VAGRANT_PACKAGE_BOX=0 00:08:22.760 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:08:22.760 FORCE_DISTRO=true 00:08:22.760 VAGRANT_BOX_VERSION= 00:08:22.760 EXTRA_VAGRANTFILES= 00:08:22.760 NIC_MODEL=virtio 00:08:22.760 00:08:22.760 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:08:22.760 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:08:25.312 Bringing machine 'default' up with 'libvirt' provider... 00:08:25.880 ==> default: Creating image (snapshot of base box volume). 00:08:25.880 ==> default: Creating domain with the following settings... 
00:08:25.880 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721899346_12b5d2767cd864acd270 00:08:25.880 ==> default: -- Domain type: kvm 00:08:25.880 ==> default: -- Cpus: 10 00:08:25.880 ==> default: -- Feature: acpi 00:08:25.880 ==> default: -- Feature: apic 00:08:25.880 ==> default: -- Feature: pae 00:08:25.880 ==> default: -- Memory: 12288M 00:08:25.880 ==> default: -- Memory Backing: hugepages: 00:08:25.880 ==> default: -- Management MAC: 00:08:25.880 ==> default: -- Loader: 00:08:25.880 ==> default: -- Nvram: 00:08:25.880 ==> default: -- Base box: spdk/fedora38 00:08:25.880 ==> default: -- Storage pool: default 00:08:25.880 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721899346_12b5d2767cd864acd270.img (20G) 00:08:25.880 ==> default: -- Volume Cache: default 00:08:25.880 ==> default: -- Kernel: 00:08:25.880 ==> default: -- Initrd: 00:08:25.880 ==> default: -- Graphics Type: vnc 00:08:25.880 ==> default: -- Graphics Port: -1 00:08:25.880 ==> default: -- Graphics IP: 127.0.0.1 00:08:25.880 ==> default: -- Graphics Password: Not defined 00:08:25.880 ==> default: -- Video Type: cirrus 00:08:25.880 ==> default: -- Video VRAM: 9216 00:08:25.880 ==> default: -- Sound Type: 00:08:25.880 ==> default: -- Keymap: en-us 00:08:25.880 ==> default: -- TPM Path: 00:08:25.880 ==> default: -- INPUT: type=mouse, bus=ps2 00:08:25.880 ==> default: -- Command line args: 00:08:25.880 ==> default: -> value=-device, 00:08:25.880 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:08:25.880 ==> default: -> value=-drive, 00:08:25.880 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:08:25.880 ==> default: -> value=-device, 00:08:25.880 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:08:25.880 ==> default: -> value=-device, 00:08:25.880 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:08:25.880 ==> default: -> value=-drive, 00:08:25.880 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0, 00:08:25.880 ==> default: -> value=-device, 00:08:25.880 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:25.880 ==> default: -> value=-device, 00:08:25.880 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:08:25.880 ==> default: -> value=-drive, 00:08:25.880 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:08:25.880 ==> default: -> value=-device, 00:08:25.880 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:25.880 ==> default: -> value=-drive, 00:08:25.880 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:08:25.880 ==> default: -> value=-device, 00:08:25.880 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:25.880 ==> default: -> value=-drive, 00:08:25.880 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:08:25.881 ==> default: -> value=-device, 00:08:25.881 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:25.881 ==> default: -> value=-device, 00:08:25.881 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:08:25.881 ==> default: -> value=-device, 00:08:25.881 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:08:25.881 ==> default: -> value=-drive, 00:08:25.881 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:08:25.881 ==> default: -> value=-device, 00:08:25.881 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:08:26.139 ==> default: Creating shared folders metadata... 00:08:26.139 ==> default: Starting domain. 00:08:27.544 ==> default: Waiting for domain to get an IP address... 00:08:45.742 ==> default: Waiting for SSH to become available... 00:08:45.742 ==> default: Configuring and enabling network interfaces... 00:08:49.928 default: SSH address: 192.168.121.225:22 00:08:49.928 default: SSH username: vagrant 00:08:49.928 default: SSH auth method: private key 00:08:52.461 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:09:02.456 ==> default: Mounting SSHFS shared folder... 00:09:03.065 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:09:03.065 ==> default: Checking Mount.. 00:09:04.441 ==> default: Folder Successfully Mounted! 00:09:04.441 ==> default: Running provisioner: file... 00:09:05.376 default: ~/.gitconfig => .gitconfig 00:09:05.971 00:09:05.971 SUCCESS! 00:09:05.971 00:09:05.971 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:09:05.971 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:09:05.971 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:09:05.971 00:09:05.981 [Pipeline] } 00:09:06.000 [Pipeline] // stage 00:09:06.010 [Pipeline] dir 00:09:06.011 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:09:06.013 [Pipeline] { 00:09:06.029 [Pipeline] catchError 00:09:06.031 [Pipeline] { 00:09:06.046 [Pipeline] sh 00:09:06.329 + vagrant ssh-config --host vagrant+ 00:09:06.329 sed -ne /^Host/,$p 00:09:06.329 + tee ssh_conf 00:09:09.619 Host vagrant 00:09:09.619 HostName 192.168.121.225 00:09:09.619 User vagrant 00:09:09.619 Port 22 00:09:09.619 UserKnownHostsFile /dev/null 00:09:09.619 StrictHostKeyChecking no 00:09:09.619 PasswordAuthentication no 00:09:09.619 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:09:09.619 IdentitiesOnly yes 00:09:09.619 LogLevel FATAL 00:09:09.619 ForwardAgent yes 00:09:09.619 ForwardX11 yes 00:09:09.619 00:09:09.633 [Pipeline] withEnv 00:09:09.636 [Pipeline] { 00:09:09.650 [Pipeline] sh 00:09:09.931 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:09:09.931 source /etc/os-release 00:09:09.931 [[ -e /image.version ]] && img=$(< /image.version) 00:09:09.931 # Minimal, systemd-like check. 
00:09:09.931 if [[ -e /.dockerenv ]]; then 00:09:09.931 # Clear garbage from the node's name: 00:09:09.931 # agt-er_autotest_547-896 -> autotest_547-896 00:09:09.931 # $HOSTNAME is the actual container id 00:09:09.931 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:09:09.931 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:09:09.931 # We can assume this is a mount from a host where container is running, 00:09:09.931 # so fetch its hostname to easily identify the target swarm worker. 00:09:09.931 container="$(< /etc/hostname) ($agent)" 00:09:09.931 else 00:09:09.931 # Fallback 00:09:09.931 container=$agent 00:09:09.931 fi 00:09:09.931 fi 00:09:09.931 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:09:09.931 00:09:10.202 [Pipeline] } 00:09:10.223 [Pipeline] // withEnv 00:09:10.233 [Pipeline] setCustomBuildProperty 00:09:10.253 [Pipeline] stage 00:09:10.256 [Pipeline] { (Tests) 00:09:10.275 [Pipeline] sh 00:09:10.562 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:09:10.836 [Pipeline] sh 00:09:11.118 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:09:11.393 [Pipeline] timeout 00:09:11.394 Timeout set to expire in 40 min 00:09:11.396 [Pipeline] { 00:09:11.412 [Pipeline] sh 00:09:11.694 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:09:12.262 HEAD is now at 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 00:09:12.275 [Pipeline] sh 00:09:12.557 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:09:12.830 [Pipeline] sh 00:09:13.114 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:09:13.391 [Pipeline] sh 00:09:13.672 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:09:13.931 ++ readlink -f spdk_repo 00:09:13.931 + DIR_ROOT=/home/vagrant/spdk_repo 00:09:13.931 + [[ -n /home/vagrant/spdk_repo ]] 00:09:13.931 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:09:13.931 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:09:13.931 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:09:13.931 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:09:13.931 + [[ -d /home/vagrant/spdk_repo/output ]] 00:09:13.931 + [[ nvme-vg-autotest == pkgdep-* ]] 00:09:13.931 + cd /home/vagrant/spdk_repo 00:09:13.931 + source /etc/os-release 00:09:13.931 ++ NAME='Fedora Linux' 00:09:13.931 ++ VERSION='38 (Cloud Edition)' 00:09:13.931 ++ ID=fedora 00:09:13.931 ++ VERSION_ID=38 00:09:13.931 ++ VERSION_CODENAME= 00:09:13.931 ++ PLATFORM_ID=platform:f38 00:09:13.931 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:09:13.931 ++ ANSI_COLOR='0;38;2;60;110;180' 00:09:13.931 ++ LOGO=fedora-logo-icon 00:09:13.931 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:09:13.931 ++ HOME_URL=https://fedoraproject.org/ 00:09:13.931 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:09:13.931 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:09:13.931 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:09:13.931 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:09:13.931 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:09:13.931 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:09:13.931 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:09:13.931 ++ SUPPORT_END=2024-05-14 00:09:13.931 ++ VARIANT='Cloud Edition' 00:09:13.931 ++ VARIANT_ID=cloud 00:09:13.931 + uname -a 00:09:13.931 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:09:13.931 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:09:14.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:14.756 Hugepages 00:09:14.756 node hugesize free / total 00:09:14.756 node0 1048576kB 0 / 0 00:09:14.756 node0 2048kB 0 / 0 00:09:14.756 00:09:14.756 Type BDF Vendor Device NUMA Driver Device Block devices 00:09:14.756 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:09:14.756 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:09:14.756 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:09:14.756 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:09:14.756 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:09:14.756 + rm -f /tmp/spdk-ld-path 00:09:14.756 + source autorun-spdk.conf 00:09:14.756 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:09:14.757 ++ SPDK_TEST_NVME=1 00:09:14.757 ++ SPDK_TEST_FTL=1 00:09:14.757 ++ SPDK_TEST_ISAL=1 00:09:14.757 ++ SPDK_RUN_ASAN=1 00:09:14.757 ++ SPDK_RUN_UBSAN=1 00:09:14.757 ++ SPDK_TEST_XNVME=1 00:09:14.757 ++ SPDK_TEST_NVME_FDP=1 00:09:14.757 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:14.757 ++ RUN_NIGHTLY=0 00:09:14.757 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:09:14.757 + [[ -n '' ]] 00:09:14.757 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:09:14.757 + for M in /var/spdk/build-*-manifest.txt 00:09:14.757 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:09:14.757 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:14.757 + for M in /var/spdk/build-*-manifest.txt 00:09:14.757 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:09:14.757 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:09:14.757 ++ uname 00:09:14.757 + [[ Linux == \L\i\n\u\x ]] 00:09:14.757 + sudo dmesg -T 00:09:14.757 + sudo dmesg --clear 00:09:14.757 + dmesg_pid=5359 00:09:14.757 + sudo dmesg -Tw 00:09:14.757 + [[ Fedora Linux == FreeBSD ]] 00:09:14.757 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:14.757 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:14.757 + [[ -e 
/var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:09:14.757 + [[ -x /usr/src/fio-static/fio ]] 00:09:14.757 + export FIO_BIN=/usr/src/fio-static/fio 00:09:14.757 + FIO_BIN=/usr/src/fio-static/fio 00:09:14.757 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:09:14.757 + [[ ! -v VFIO_QEMU_BIN ]] 00:09:14.757 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:09:14.757 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:14.757 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:14.757 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:09:14.757 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:14.757 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:14.757 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:09:14.757 Test configuration: 00:09:14.757 SPDK_RUN_FUNCTIONAL_TEST=1 00:09:14.757 SPDK_TEST_NVME=1 00:09:14.757 SPDK_TEST_FTL=1 00:09:14.757 SPDK_TEST_ISAL=1 00:09:14.757 SPDK_RUN_ASAN=1 00:09:14.757 SPDK_RUN_UBSAN=1 00:09:14.757 SPDK_TEST_XNVME=1 00:09:14.757 SPDK_TEST_NVME_FDP=1 00:09:14.757 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:09:15.040 RUN_NIGHTLY=0 09:23:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:15.040 09:23:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:09:15.040 09:23:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.040 09:23:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.040 09:23:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.040 09:23:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.040 09:23:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.040 09:23:15 -- paths/export.sh@5 -- $ export PATH 00:09:15.040 09:23:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.040 09:23:15 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:09:15.040 09:23:15 -- common/autobuild_common.sh@447 -- $ date +%s 00:09:15.040 09:23:15 -- common/autobuild_common.sh@447 -- $ mktemp -dt 
spdk_1721899395.XXXXXX 00:09:15.040 09:23:15 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721899395.zyCNNJ 00:09:15.040 09:23:15 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:09:15.040 09:23:15 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:09:15.040 09:23:15 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:09:15.040 09:23:15 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:09:15.040 09:23:15 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:09:15.040 09:23:15 -- common/autobuild_common.sh@463 -- $ get_config_params 00:09:15.040 09:23:15 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:09:15.040 09:23:15 -- common/autotest_common.sh@10 -- $ set +x 00:09:15.040 09:23:15 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:09:15.040 09:23:15 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:09:15.040 09:23:15 -- pm/common@17 -- $ local monitor 00:09:15.040 09:23:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:15.040 09:23:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:09:15.040 09:23:15 -- pm/common@25 -- $ sleep 1 00:09:15.040 09:23:15 -- pm/common@21 -- $ date +%s 00:09:15.040 09:23:15 -- pm/common@21 -- $ date +%s 00:09:15.040 09:23:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721899395 00:09:15.040 09:23:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721899395 00:09:15.040 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721899395_collect-vmstat.pm.log 00:09:15.040 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721899395_collect-cpu-load.pm.log 00:09:15.976 09:23:16 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:09:15.976 09:23:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:09:15.976 09:23:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:09:15.976 09:23:16 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:09:15.976 09:23:16 -- spdk/autobuild.sh@16 -- $ date -u 00:09:15.976 Thu Jul 25 09:23:16 AM UTC 2024 00:09:15.976 09:23:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:09:15.976 v24.09-pre-321-g704257090 00:09:15.976 09:23:16 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:09:15.976 09:23:16 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:09:15.976 09:23:16 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:09:15.976 09:23:16 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:09:15.976 09:23:16 -- common/autotest_common.sh@10 -- $ set +x 00:09:15.976 ************************************ 00:09:15.976 START TEST asan 00:09:15.976 ************************************ 00:09:15.976 using asan 00:09:15.976 09:23:16 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:09:15.976 00:09:15.976 
real 0m0.001s 00:09:15.976 user 0m0.000s 00:09:15.976 sys 0m0.000s 00:09:15.976 09:23:16 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:09:15.976 09:23:16 asan -- common/autotest_common.sh@10 -- $ set +x 00:09:15.976 ************************************ 00:09:15.976 END TEST asan 00:09:15.976 ************************************ 00:09:16.235 09:23:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:09:16.235 09:23:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:09:16.235 09:23:16 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:09:16.235 09:23:16 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:09:16.235 09:23:16 -- common/autotest_common.sh@10 -- $ set +x 00:09:16.235 ************************************ 00:09:16.235 START TEST ubsan 00:09:16.235 ************************************ 00:09:16.235 using ubsan 00:09:16.235 09:23:16 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:09:16.235 00:09:16.235 real 0m0.000s 00:09:16.235 user 0m0.000s 00:09:16.235 sys 0m0.000s 00:09:16.235 09:23:16 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:09:16.235 09:23:16 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:09:16.235 ************************************ 00:09:16.235 END TEST ubsan 00:09:16.235 ************************************ 00:09:16.235 09:23:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:09:16.235 09:23:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:09:16.235 09:23:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:09:16.235 09:23:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:09:16.235 09:23:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:09:16.235 09:23:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:09:16.235 09:23:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:09:16.235 09:23:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:09:16.235 09:23:16 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:09:16.235 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:16.235 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:09:16.803 Using 'verbs' RDMA provider 00:09:33.147 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:09:48.041 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:09:48.041 Creating mk/config.mk...done. 00:09:48.041 Creating mk/cc.flags.mk...done. 00:09:48.041 Type 'make' to build. 
00:09:48.041 09:23:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:09:48.041 09:23:46 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:09:48.041 09:23:46 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:09:48.041 09:23:46 -- common/autotest_common.sh@10 -- $ set +x 00:09:48.041 ************************************ 00:09:48.041 START TEST make 00:09:48.041 ************************************ 00:09:48.041 09:23:46 make -- common/autotest_common.sh@1125 -- $ make -j10 00:09:48.041 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:09:48.041 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:09:48.041 meson setup builddir \ 00:09:48.041 -Dwith-libaio=enabled \ 00:09:48.041 -Dwith-liburing=enabled \ 00:09:48.041 -Dwith-libvfn=disabled \ 00:09:48.041 -Dwith-spdk=false && \ 00:09:48.041 meson compile -C builddir && \ 00:09:48.041 cd -) 00:09:48.041 make[1]: Nothing to be done for 'all'. 00:09:48.979 The Meson build system 00:09:48.979 Version: 1.3.1 00:09:48.979 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:09:48.979 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:09:48.979 Build type: native build 00:09:48.979 Project name: xnvme 00:09:48.979 Project version: 0.7.3 00:09:48.979 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:09:48.979 C linker for the host machine: cc ld.bfd 2.39-16 00:09:48.979 Host machine cpu family: x86_64 00:09:48.979 Host machine cpu: x86_64 00:09:48.979 Message: host_machine.system: linux 00:09:48.979 Compiler for C supports arguments -Wno-missing-braces: YES 00:09:48.979 Compiler for C supports arguments -Wno-cast-function-type: YES 00:09:48.979 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:09:48.979 Run-time dependency threads found: YES 00:09:48.979 Has header "setupapi.h" : NO 00:09:48.979 Has header "linux/blkzoned.h" : YES 00:09:48.979 Has header "linux/blkzoned.h" : YES (cached) 00:09:48.979 Has header "libaio.h" : YES 00:09:48.979 Library aio found: YES 00:09:48.979 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:09:48.979 Run-time dependency liburing found: YES 2.2 00:09:48.979 Dependency libvfn skipped: feature with-libvfn disabled 00:09:48.979 Run-time dependency appleframeworks found: NO (tried framework) 00:09:48.979 Run-time dependency appleframeworks found: NO (tried framework) 00:09:48.979 Configuring xnvme_config.h using configuration 00:09:48.979 Configuring xnvme.spec using configuration 00:09:48.979 Run-time dependency bash-completion found: YES 2.11 00:09:48.979 Message: Bash-completions: /usr/share/bash-completion/completions 00:09:48.979 Program cp found: YES (/usr/bin/cp) 00:09:48.979 Has header "winsock2.h" : NO 00:09:48.979 Has header "dbghelp.h" : NO 00:09:48.979 Library rpcrt4 found: NO 00:09:48.979 Library rt found: YES 00:09:48.979 Checking for function "clock_gettime" with dependency -lrt: YES 00:09:48.979 Found CMake: /usr/bin/cmake (3.27.7) 00:09:48.979 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:09:48.979 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:09:48.979 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:09:48.979 Build targets in project: 32 00:09:48.979 00:09:48.979 xnvme 0.7.3 00:09:48.979 00:09:48.979 User defined options 00:09:48.979 with-libaio : enabled 00:09:48.979 with-liburing: enabled 00:09:48.979 with-libvfn : disabled 00:09:48.979 with-spdk : false 00:09:48.979 00:09:48.979 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:09:49.548 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:09:49.548 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:09:49.548 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:09:49.548 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:09:49.548 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:09:49.548 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:09:49.548 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:09:49.548 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:09:49.548 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:09:49.548 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:09:49.548 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:09:49.548 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:09:49.548 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:09:49.548 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:09:49.807 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:09:49.807 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:09:49.807 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:09:49.807 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:09:49.807 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:09:49.807 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:09:49.807 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:09:49.807 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:09:49.807 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:09:49.807 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:09:49.807 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:09:49.807 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:09:49.807 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:09:49.807 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:09:49.807 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:09:49.807 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:09:49.807 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:09:49.807 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:09:49.807 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:09:49.807 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:09:49.807 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:09:49.807 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:09:49.807 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:09:49.807 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:09:49.807 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:09:50.065 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:09:50.065 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:09:50.065 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:09:50.066 [42/203] Compiling C 
object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:09:50.066 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:09:50.066 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:09:50.066 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:09:50.066 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:09:50.066 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:09:50.066 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:09:50.066 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:09:50.066 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:09:50.066 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:09:50.066 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:09:50.066 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:09:50.066 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:09:50.066 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:09:50.066 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:09:50.066 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:09:50.066 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:09:50.066 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:09:50.066 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:09:50.066 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:09:50.066 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:09:50.324 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:09:50.324 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:09:50.324 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:09:50.324 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:09:50.324 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:09:50.324 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:09:50.324 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:09:50.324 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:09:50.324 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:09:50.324 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:09:50.324 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:09:50.324 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:09:50.324 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:09:50.324 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:09:50.324 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:09:50.324 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:09:50.324 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:09:50.585 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:09:50.585 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:09:50.585 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:09:50.585 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:09:50.585 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:09:50.585 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:09:50.585 [86/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_macos.c.o 00:09:50.585 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:09:50.585 [88/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:09:50.585 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:09:50.585 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:09:50.585 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:09:50.585 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:09:50.585 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:09:50.585 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:09:50.585 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:09:50.585 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:09:50.845 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:09:50.845 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:09:50.845 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:09:50.845 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:09:50.845 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:09:50.845 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:09:50.845 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:09:50.845 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:09:50.845 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:09:50.845 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:09:50.845 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:09:50.845 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:09:50.845 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:09:50.845 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:09:50.845 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:09:50.845 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:09:50.845 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:09:50.845 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:09:50.845 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:09:50.845 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:09:50.845 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:09:50.845 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:09:50.845 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:09:50.845 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:09:50.845 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:09:50.845 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:09:50.845 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:09:50.845 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:09:50.845 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:09:50.845 [126/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:09:50.845 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:09:50.845 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:09:50.845 [129/203] Compiling C object 
lib/libxnvme.a.p/xnvme_ident.c.o 00:09:51.105 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:09:51.105 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:09:51.105 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:09:51.105 [133/203] Linking target lib/libxnvme.so 00:09:51.105 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:09:51.105 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:09:51.105 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:09:51.105 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:09:51.105 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:09:51.105 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:09:51.105 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:09:51.105 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:09:51.105 [142/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:09:51.105 [143/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:09:51.105 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:09:51.365 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:09:51.365 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:09:51.365 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:09:51.365 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:09:51.365 [149/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:09:51.365 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:09:51.365 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:09:51.365 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:09:51.365 [153/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:09:51.365 [154/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:09:51.365 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:09:51.365 [156/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:09:51.365 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:09:51.624 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:09:51.624 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:09:51.624 [160/203] Compiling C object tools/kvs.p/kvs.c.o 00:09:51.624 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:09:51.624 [162/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:09:51.624 [163/203] Compiling C object tools/xdd.p/xdd.c.o 00:09:51.624 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:09:51.624 [165/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:09:51.624 [166/203] Compiling C object tools/lblk.p/lblk.c.o 00:09:51.624 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:09:51.624 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:09:51.624 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:09:51.624 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:09:51.882 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:09:51.882 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:09:51.882 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:09:51.882 [174/203] Linking static target lib/libxnvme.a 00:09:51.882 [175/203] Linking target tests/xnvme_tests_cli 
00:09:51.882 [176/203] Linking target tests/xnvme_tests_buf 00:09:51.882 [177/203] Linking target tests/xnvme_tests_async_intf 00:09:51.882 [178/203] Linking target tests/xnvme_tests_lblk 00:09:51.882 [179/203] Linking target tests/xnvme_tests_znd_append 00:09:51.882 [180/203] Linking target tests/xnvme_tests_enum 00:09:51.882 [181/203] Linking target tests/xnvme_tests_scc 00:09:51.883 [182/203] Linking target tests/xnvme_tests_xnvme_cli 00:09:51.883 [183/203] Linking target tests/xnvme_tests_znd_explicit_open 00:09:51.883 [184/203] Linking target tests/xnvme_tests_ioworker 00:09:51.883 [185/203] Linking target tests/xnvme_tests_xnvme_file 00:09:51.883 [186/203] Linking target tests/xnvme_tests_znd_state 00:09:51.883 [187/203] Linking target tests/xnvme_tests_znd_zrwa 00:09:51.883 [188/203] Linking target tests/xnvme_tests_kvs 00:09:51.883 [189/203] Linking target tests/xnvme_tests_map 00:09:51.883 [190/203] Linking target tools/lblk 00:09:51.883 [191/203] Linking target tools/xnvme 00:09:52.142 [192/203] Linking target examples/xnvme_enum 00:09:52.142 [193/203] Linking target tools/xdd 00:09:52.142 [194/203] Linking target tools/kvs 00:09:52.142 [195/203] Linking target tools/zoned 00:09:52.142 [196/203] Linking target examples/xnvme_hello 00:09:52.142 [197/203] Linking target tools/xnvme_file 00:09:52.142 [198/203] Linking target examples/zoned_io_sync 00:09:52.142 [199/203] Linking target examples/xnvme_dev 00:09:52.142 [200/203] Linking target examples/xnvme_single_async 00:09:52.142 [201/203] Linking target examples/zoned_io_async 00:09:52.142 [202/203] Linking target examples/xnvme_io_async 00:09:52.142 [203/203] Linking target examples/xnvme_single_sync 00:09:52.142 INFO: autodetecting backend as ninja 00:09:52.142 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:09:52.142 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:10:00.255 The Meson build system 00:10:00.255 Version: 1.3.1 00:10:00.255 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:10:00.255 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:10:00.255 Build type: native build 00:10:00.255 Program cat found: YES (/usr/bin/cat) 00:10:00.255 Project name: DPDK 00:10:00.255 Project version: 24.03.0 00:10:00.255 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:10:00.255 C linker for the host machine: cc ld.bfd 2.39-16 00:10:00.255 Host machine cpu family: x86_64 00:10:00.255 Host machine cpu: x86_64 00:10:00.255 Message: ## Building in Developer Mode ## 00:10:00.255 Program pkg-config found: YES (/usr/bin/pkg-config) 00:10:00.255 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:10:00.255 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:10:00.255 Program python3 found: YES (/usr/bin/python3) 00:10:00.255 Program cat found: YES (/usr/bin/cat) 00:10:00.255 Compiler for C supports arguments -march=native: YES 00:10:00.255 Checking for size of "void *" : 8 00:10:00.255 Checking for size of "void *" : 8 (cached) 00:10:00.255 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:10:00.255 Library m found: YES 00:10:00.255 Library numa found: YES 00:10:00.255 Has header "numaif.h" : YES 00:10:00.255 Library fdt found: NO 00:10:00.255 Library execinfo found: NO 00:10:00.255 Has header "execinfo.h" : YES 00:10:00.255 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:10:00.255 
Run-time dependency libarchive found: NO (tried pkgconfig) 00:10:00.255 Run-time dependency libbsd found: NO (tried pkgconfig) 00:10:00.255 Run-time dependency jansson found: NO (tried pkgconfig) 00:10:00.255 Run-time dependency openssl found: YES 3.0.9 00:10:00.255 Run-time dependency libpcap found: YES 1.10.4 00:10:00.255 Has header "pcap.h" with dependency libpcap: YES 00:10:00.255 Compiler for C supports arguments -Wcast-qual: YES 00:10:00.255 Compiler for C supports arguments -Wdeprecated: YES 00:10:00.255 Compiler for C supports arguments -Wformat: YES 00:10:00.255 Compiler for C supports arguments -Wformat-nonliteral: NO 00:10:00.255 Compiler for C supports arguments -Wformat-security: NO 00:10:00.255 Compiler for C supports arguments -Wmissing-declarations: YES 00:10:00.255 Compiler for C supports arguments -Wmissing-prototypes: YES 00:10:00.255 Compiler for C supports arguments -Wnested-externs: YES 00:10:00.255 Compiler for C supports arguments -Wold-style-definition: YES 00:10:00.255 Compiler for C supports arguments -Wpointer-arith: YES 00:10:00.255 Compiler for C supports arguments -Wsign-compare: YES 00:10:00.255 Compiler for C supports arguments -Wstrict-prototypes: YES 00:10:00.255 Compiler for C supports arguments -Wundef: YES 00:10:00.255 Compiler for C supports arguments -Wwrite-strings: YES 00:10:00.255 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:10:00.255 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:10:00.255 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:10:00.255 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:10:00.255 Program objdump found: YES (/usr/bin/objdump) 00:10:00.255 Compiler for C supports arguments -mavx512f: YES 00:10:00.255 Checking if "AVX512 checking" compiles: YES 00:10:00.255 Fetching value of define "__SSE4_2__" : 1 00:10:00.255 Fetching value of define "__AES__" : 1 00:10:00.255 Fetching value of define "__AVX__" : 1 00:10:00.255 Fetching value of define "__AVX2__" : 1 00:10:00.255 Fetching value of define "__AVX512BW__" : 1 00:10:00.255 Fetching value of define "__AVX512CD__" : 1 00:10:00.255 Fetching value of define "__AVX512DQ__" : 1 00:10:00.255 Fetching value of define "__AVX512F__" : 1 00:10:00.255 Fetching value of define "__AVX512VL__" : 1 00:10:00.255 Fetching value of define "__PCLMUL__" : 1 00:10:00.255 Fetching value of define "__RDRND__" : 1 00:10:00.255 Fetching value of define "__RDSEED__" : 1 00:10:00.255 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:10:00.255 Fetching value of define "__znver1__" : (undefined) 00:10:00.255 Fetching value of define "__znver2__" : (undefined) 00:10:00.255 Fetching value of define "__znver3__" : (undefined) 00:10:00.255 Fetching value of define "__znver4__" : (undefined) 00:10:00.255 Library asan found: YES 00:10:00.255 Compiler for C supports arguments -Wno-format-truncation: YES 00:10:00.255 Message: lib/log: Defining dependency "log" 00:10:00.255 Message: lib/kvargs: Defining dependency "kvargs" 00:10:00.255 Message: lib/telemetry: Defining dependency "telemetry" 00:10:00.256 Library rt found: YES 00:10:00.256 Checking for function "getentropy" : NO 00:10:00.256 Message: lib/eal: Defining dependency "eal" 00:10:00.256 Message: lib/ring: Defining dependency "ring" 00:10:00.256 Message: lib/rcu: Defining dependency "rcu" 00:10:00.256 Message: lib/mempool: Defining dependency "mempool" 00:10:00.256 Message: lib/mbuf: Defining dependency "mbuf" 00:10:00.256 Fetching value of define "__PCLMUL__" 
: 1 (cached) 00:10:00.256 Fetching value of define "__AVX512F__" : 1 (cached) 00:10:00.256 Fetching value of define "__AVX512BW__" : 1 (cached) 00:10:00.256 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:10:00.256 Fetching value of define "__AVX512VL__" : 1 (cached) 00:10:00.256 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:10:00.256 Compiler for C supports arguments -mpclmul: YES 00:10:00.256 Compiler for C supports arguments -maes: YES 00:10:00.256 Compiler for C supports arguments -mavx512f: YES (cached) 00:10:00.256 Compiler for C supports arguments -mavx512bw: YES 00:10:00.256 Compiler for C supports arguments -mavx512dq: YES 00:10:00.256 Compiler for C supports arguments -mavx512vl: YES 00:10:00.256 Compiler for C supports arguments -mvpclmulqdq: YES 00:10:00.256 Compiler for C supports arguments -mavx2: YES 00:10:00.256 Compiler for C supports arguments -mavx: YES 00:10:00.256 Message: lib/net: Defining dependency "net" 00:10:00.256 Message: lib/meter: Defining dependency "meter" 00:10:00.256 Message: lib/ethdev: Defining dependency "ethdev" 00:10:00.256 Message: lib/pci: Defining dependency "pci" 00:10:00.256 Message: lib/cmdline: Defining dependency "cmdline" 00:10:00.256 Message: lib/hash: Defining dependency "hash" 00:10:00.256 Message: lib/timer: Defining dependency "timer" 00:10:00.256 Message: lib/compressdev: Defining dependency "compressdev" 00:10:00.256 Message: lib/cryptodev: Defining dependency "cryptodev" 00:10:00.256 Message: lib/dmadev: Defining dependency "dmadev" 00:10:00.256 Compiler for C supports arguments -Wno-cast-qual: YES 00:10:00.256 Message: lib/power: Defining dependency "power" 00:10:00.256 Message: lib/reorder: Defining dependency "reorder" 00:10:00.256 Message: lib/security: Defining dependency "security" 00:10:00.256 Has header "linux/userfaultfd.h" : YES 00:10:00.256 Has header "linux/vduse.h" : YES 00:10:00.256 Message: lib/vhost: Defining dependency "vhost" 00:10:00.256 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:10:00.256 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:10:00.256 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:10:00.256 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:10:00.256 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:10:00.256 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:10:00.256 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:10:00.256 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:10:00.256 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:10:00.256 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:10:00.256 Program doxygen found: YES (/usr/bin/doxygen) 00:10:00.256 Configuring doxy-api-html.conf using configuration 00:10:00.256 Configuring doxy-api-man.conf using configuration 00:10:00.256 Program mandb found: YES (/usr/bin/mandb) 00:10:00.256 Program sphinx-build found: NO 00:10:00.256 Configuring rte_build_config.h using configuration 00:10:00.256 Message: 00:10:00.256 ================= 00:10:00.256 Applications Enabled 00:10:00.256 ================= 00:10:00.256 00:10:00.256 apps: 00:10:00.256 00:10:00.256 00:10:00.256 Message: 00:10:00.256 ================= 00:10:00.256 Libraries Enabled 00:10:00.256 ================= 00:10:00.256 00:10:00.256 libs: 00:10:00.256 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:10:00.256 net, meter, ethdev, 
pci, cmdline, hash, timer, compressdev, 00:10:00.256 cryptodev, dmadev, power, reorder, security, vhost, 00:10:00.256 00:10:00.256 Message: 00:10:00.256 =============== 00:10:00.256 Drivers Enabled 00:10:00.256 =============== 00:10:00.256 00:10:00.256 common: 00:10:00.256 00:10:00.256 bus: 00:10:00.256 pci, vdev, 00:10:00.256 mempool: 00:10:00.256 ring, 00:10:00.256 dma: 00:10:00.256 00:10:00.256 net: 00:10:00.256 00:10:00.256 crypto: 00:10:00.256 00:10:00.256 compress: 00:10:00.256 00:10:00.256 vdpa: 00:10:00.256 00:10:00.256 00:10:00.256 Message: 00:10:00.256 ================= 00:10:00.256 Content Skipped 00:10:00.256 ================= 00:10:00.256 00:10:00.256 apps: 00:10:00.256 dumpcap: explicitly disabled via build config 00:10:00.256 graph: explicitly disabled via build config 00:10:00.256 pdump: explicitly disabled via build config 00:10:00.256 proc-info: explicitly disabled via build config 00:10:00.256 test-acl: explicitly disabled via build config 00:10:00.256 test-bbdev: explicitly disabled via build config 00:10:00.256 test-cmdline: explicitly disabled via build config 00:10:00.256 test-compress-perf: explicitly disabled via build config 00:10:00.256 test-crypto-perf: explicitly disabled via build config 00:10:00.256 test-dma-perf: explicitly disabled via build config 00:10:00.256 test-eventdev: explicitly disabled via build config 00:10:00.256 test-fib: explicitly disabled via build config 00:10:00.256 test-flow-perf: explicitly disabled via build config 00:10:00.256 test-gpudev: explicitly disabled via build config 00:10:00.256 test-mldev: explicitly disabled via build config 00:10:00.256 test-pipeline: explicitly disabled via build config 00:10:00.256 test-pmd: explicitly disabled via build config 00:10:00.256 test-regex: explicitly disabled via build config 00:10:00.256 test-sad: explicitly disabled via build config 00:10:00.256 test-security-perf: explicitly disabled via build config 00:10:00.256 00:10:00.256 libs: 00:10:00.256 argparse: explicitly disabled via build config 00:10:00.256 metrics: explicitly disabled via build config 00:10:00.256 acl: explicitly disabled via build config 00:10:00.256 bbdev: explicitly disabled via build config 00:10:00.256 bitratestats: explicitly disabled via build config 00:10:00.256 bpf: explicitly disabled via build config 00:10:00.256 cfgfile: explicitly disabled via build config 00:10:00.256 distributor: explicitly disabled via build config 00:10:00.256 efd: explicitly disabled via build config 00:10:00.256 eventdev: explicitly disabled via build config 00:10:00.256 dispatcher: explicitly disabled via build config 00:10:00.256 gpudev: explicitly disabled via build config 00:10:00.256 gro: explicitly disabled via build config 00:10:00.256 gso: explicitly disabled via build config 00:10:00.256 ip_frag: explicitly disabled via build config 00:10:00.256 jobstats: explicitly disabled via build config 00:10:00.256 latencystats: explicitly disabled via build config 00:10:00.256 lpm: explicitly disabled via build config 00:10:00.256 member: explicitly disabled via build config 00:10:00.256 pcapng: explicitly disabled via build config 00:10:00.256 rawdev: explicitly disabled via build config 00:10:00.256 regexdev: explicitly disabled via build config 00:10:00.256 mldev: explicitly disabled via build config 00:10:00.256 rib: explicitly disabled via build config 00:10:00.256 sched: explicitly disabled via build config 00:10:00.256 stack: explicitly disabled via build config 00:10:00.256 ipsec: explicitly disabled via build config 00:10:00.256 
pdcp: explicitly disabled via build config 00:10:00.256 fib: explicitly disabled via build config 00:10:00.256 port: explicitly disabled via build config 00:10:00.256 pdump: explicitly disabled via build config 00:10:00.256 table: explicitly disabled via build config 00:10:00.256 pipeline: explicitly disabled via build config 00:10:00.256 graph: explicitly disabled via build config 00:10:00.256 node: explicitly disabled via build config 00:10:00.256 00:10:00.256 drivers: 00:10:00.256 common/cpt: not in enabled drivers build config 00:10:00.256 common/dpaax: not in enabled drivers build config 00:10:00.256 common/iavf: not in enabled drivers build config 00:10:00.256 common/idpf: not in enabled drivers build config 00:10:00.256 common/ionic: not in enabled drivers build config 00:10:00.256 common/mvep: not in enabled drivers build config 00:10:00.256 common/octeontx: not in enabled drivers build config 00:10:00.256 bus/auxiliary: not in enabled drivers build config 00:10:00.256 bus/cdx: not in enabled drivers build config 00:10:00.256 bus/dpaa: not in enabled drivers build config 00:10:00.256 bus/fslmc: not in enabled drivers build config 00:10:00.256 bus/ifpga: not in enabled drivers build config 00:10:00.256 bus/platform: not in enabled drivers build config 00:10:00.256 bus/uacce: not in enabled drivers build config 00:10:00.256 bus/vmbus: not in enabled drivers build config 00:10:00.256 common/cnxk: not in enabled drivers build config 00:10:00.256 common/mlx5: not in enabled drivers build config 00:10:00.256 common/nfp: not in enabled drivers build config 00:10:00.256 common/nitrox: not in enabled drivers build config 00:10:00.256 common/qat: not in enabled drivers build config 00:10:00.256 common/sfc_efx: not in enabled drivers build config 00:10:00.256 mempool/bucket: not in enabled drivers build config 00:10:00.256 mempool/cnxk: not in enabled drivers build config 00:10:00.256 mempool/dpaa: not in enabled drivers build config 00:10:00.256 mempool/dpaa2: not in enabled drivers build config 00:10:00.256 mempool/octeontx: not in enabled drivers build config 00:10:00.256 mempool/stack: not in enabled drivers build config 00:10:00.256 dma/cnxk: not in enabled drivers build config 00:10:00.256 dma/dpaa: not in enabled drivers build config 00:10:00.256 dma/dpaa2: not in enabled drivers build config 00:10:00.256 dma/hisilicon: not in enabled drivers build config 00:10:00.256 dma/idxd: not in enabled drivers build config 00:10:00.256 dma/ioat: not in enabled drivers build config 00:10:00.257 dma/skeleton: not in enabled drivers build config 00:10:00.257 net/af_packet: not in enabled drivers build config 00:10:00.257 net/af_xdp: not in enabled drivers build config 00:10:00.257 net/ark: not in enabled drivers build config 00:10:00.257 net/atlantic: not in enabled drivers build config 00:10:00.257 net/avp: not in enabled drivers build config 00:10:00.257 net/axgbe: not in enabled drivers build config 00:10:00.257 net/bnx2x: not in enabled drivers build config 00:10:00.257 net/bnxt: not in enabled drivers build config 00:10:00.257 net/bonding: not in enabled drivers build config 00:10:00.257 net/cnxk: not in enabled drivers build config 00:10:00.257 net/cpfl: not in enabled drivers build config 00:10:00.257 net/cxgbe: not in enabled drivers build config 00:10:00.257 net/dpaa: not in enabled drivers build config 00:10:00.257 net/dpaa2: not in enabled drivers build config 00:10:00.257 net/e1000: not in enabled drivers build config 00:10:00.257 net/ena: not in enabled drivers build config 
00:10:00.257 net/enetc: not in enabled drivers build config 00:10:00.257 net/enetfec: not in enabled drivers build config 00:10:00.257 net/enic: not in enabled drivers build config 00:10:00.257 net/failsafe: not in enabled drivers build config 00:10:00.257 net/fm10k: not in enabled drivers build config 00:10:00.257 net/gve: not in enabled drivers build config 00:10:00.257 net/hinic: not in enabled drivers build config 00:10:00.257 net/hns3: not in enabled drivers build config 00:10:00.257 net/i40e: not in enabled drivers build config 00:10:00.257 net/iavf: not in enabled drivers build config 00:10:00.257 net/ice: not in enabled drivers build config 00:10:00.257 net/idpf: not in enabled drivers build config 00:10:00.257 net/igc: not in enabled drivers build config 00:10:00.257 net/ionic: not in enabled drivers build config 00:10:00.257 net/ipn3ke: not in enabled drivers build config 00:10:00.257 net/ixgbe: not in enabled drivers build config 00:10:00.257 net/mana: not in enabled drivers build config 00:10:00.257 net/memif: not in enabled drivers build config 00:10:00.257 net/mlx4: not in enabled drivers build config 00:10:00.257 net/mlx5: not in enabled drivers build config 00:10:00.257 net/mvneta: not in enabled drivers build config 00:10:00.257 net/mvpp2: not in enabled drivers build config 00:10:00.257 net/netvsc: not in enabled drivers build config 00:10:00.257 net/nfb: not in enabled drivers build config 00:10:00.257 net/nfp: not in enabled drivers build config 00:10:00.257 net/ngbe: not in enabled drivers build config 00:10:00.257 net/null: not in enabled drivers build config 00:10:00.257 net/octeontx: not in enabled drivers build config 00:10:00.257 net/octeon_ep: not in enabled drivers build config 00:10:00.257 net/pcap: not in enabled drivers build config 00:10:00.257 net/pfe: not in enabled drivers build config 00:10:00.257 net/qede: not in enabled drivers build config 00:10:00.257 net/ring: not in enabled drivers build config 00:10:00.257 net/sfc: not in enabled drivers build config 00:10:00.257 net/softnic: not in enabled drivers build config 00:10:00.257 net/tap: not in enabled drivers build config 00:10:00.257 net/thunderx: not in enabled drivers build config 00:10:00.257 net/txgbe: not in enabled drivers build config 00:10:00.257 net/vdev_netvsc: not in enabled drivers build config 00:10:00.257 net/vhost: not in enabled drivers build config 00:10:00.257 net/virtio: not in enabled drivers build config 00:10:00.257 net/vmxnet3: not in enabled drivers build config 00:10:00.257 raw/*: missing internal dependency, "rawdev" 00:10:00.257 crypto/armv8: not in enabled drivers build config 00:10:00.257 crypto/bcmfs: not in enabled drivers build config 00:10:00.257 crypto/caam_jr: not in enabled drivers build config 00:10:00.257 crypto/ccp: not in enabled drivers build config 00:10:00.257 crypto/cnxk: not in enabled drivers build config 00:10:00.257 crypto/dpaa_sec: not in enabled drivers build config 00:10:00.257 crypto/dpaa2_sec: not in enabled drivers build config 00:10:00.257 crypto/ipsec_mb: not in enabled drivers build config 00:10:00.257 crypto/mlx5: not in enabled drivers build config 00:10:00.257 crypto/mvsam: not in enabled drivers build config 00:10:00.257 crypto/nitrox: not in enabled drivers build config 00:10:00.257 crypto/null: not in enabled drivers build config 00:10:00.257 crypto/octeontx: not in enabled drivers build config 00:10:00.257 crypto/openssl: not in enabled drivers build config 00:10:00.257 crypto/scheduler: not in enabled drivers build config 00:10:00.257 
crypto/uadk: not in enabled drivers build config 00:10:00.257 crypto/virtio: not in enabled drivers build config 00:10:00.257 compress/isal: not in enabled drivers build config 00:10:00.257 compress/mlx5: not in enabled drivers build config 00:10:00.257 compress/nitrox: not in enabled drivers build config 00:10:00.257 compress/octeontx: not in enabled drivers build config 00:10:00.257 compress/zlib: not in enabled drivers build config 00:10:00.257 regex/*: missing internal dependency, "regexdev" 00:10:00.257 ml/*: missing internal dependency, "mldev" 00:10:00.257 vdpa/ifc: not in enabled drivers build config 00:10:00.257 vdpa/mlx5: not in enabled drivers build config 00:10:00.257 vdpa/nfp: not in enabled drivers build config 00:10:00.257 vdpa/sfc: not in enabled drivers build config 00:10:00.257 event/*: missing internal dependency, "eventdev" 00:10:00.257 baseband/*: missing internal dependency, "bbdev" 00:10:00.257 gpu/*: missing internal dependency, "gpudev" 00:10:00.257 00:10:00.257 00:10:00.257 Build targets in project: 85 00:10:00.257 00:10:00.257 DPDK 24.03.0 00:10:00.257 00:10:00.257 User defined options 00:10:00.257 buildtype : debug 00:10:00.257 default_library : shared 00:10:00.257 libdir : lib 00:10:00.257 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:10:00.257 b_sanitize : address 00:10:00.257 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:10:00.257 c_link_args : 00:10:00.257 cpu_instruction_set: native 00:10:00.257 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:10:00.257 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:10:00.257 enable_docs : false 00:10:00.257 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:10:00.257 enable_kmods : false 00:10:00.257 max_lcores : 128 00:10:00.257 tests : false 00:10:00.257 00:10:00.257 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:10:00.257 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:10:00.257 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:10:00.257 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:10:00.257 [3/268] Linking static target lib/librte_kvargs.a 00:10:00.257 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:10:00.257 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:10:00.257 [6/268] Linking static target lib/librte_log.a 00:10:00.257 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:10:00.257 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:10:00.257 [9/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:10:00.257 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:10:00.257 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:10:00.257 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:10:00.257 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:10:00.257 [14/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:10:00.257 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:10:00.516 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:10:00.516 [17/268] Linking static target lib/librte_telemetry.a 00:10:00.516 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:10:00.774 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:10:00.775 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:10:00.775 [21/268] Linking target lib/librte_log.so.24.1 00:10:00.775 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:10:00.775 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:10:00.775 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:10:01.034 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:10:01.034 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:10:01.034 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:10:01.034 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:10:01.034 [29/268] Linking target lib/librte_kvargs.so.24.1 00:10:01.034 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:10:01.297 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:10:01.297 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:10:01.297 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:10:01.297 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:10:01.297 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:10:01.564 [36/268] Linking target lib/librte_telemetry.so.24.1 00:10:01.565 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:10:01.565 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:10:01.565 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:10:01.565 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:10:01.565 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:10:01.565 [42/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:10:01.565 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:10:01.565 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:10:01.825 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:10:01.825 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:10:01.825 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:10:01.825 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:10:02.085 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:10:02.085 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:10:02.085 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:10:02.344 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:10:02.344 [53/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:10:02.344 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:10:02.344 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:10:02.344 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:10:02.604 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:10:02.604 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:10:02.604 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:10:02.864 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:10:02.864 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:10:02.864 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:10:02.864 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:10:02.864 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:10:02.864 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:10:02.864 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:10:03.123 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:10:03.381 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:10:03.381 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:10:03.381 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:10:03.381 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:10:03.381 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:10:03.381 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:10:03.640 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:10:03.640 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:10:03.640 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:10:03.640 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:10:03.640 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:10:03.900 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:10:03.900 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:10:03.900 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:10:03.900 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:10:04.158 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:10:04.158 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:10:04.158 [85/268] Linking static target lib/librte_ring.a 00:10:04.158 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:10:04.418 [87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:10:04.418 [88/268] Linking static target lib/librte_eal.a 00:10:04.418 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:10:04.678 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:10:04.678 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:10:04.678 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:10:04.678 [93/268] Linking static target lib/librte_rcu.a 00:10:04.678 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 
00:10:04.678 [95/268] Linking static target lib/librte_mempool.a 00:10:04.678 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:10:04.678 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:10:04.938 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:10:04.938 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:10:04.938 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:10:05.198 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:10:05.198 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:10:05.198 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:10:05.198 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:10:05.458 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:10:05.458 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:10:05.458 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:10:05.458 [108/268] Linking static target lib/librte_meter.a 00:10:05.458 [109/268] Linking static target lib/librte_net.a 00:10:05.718 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:10:05.718 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:10:05.718 [112/268] Linking static target lib/librte_mbuf.a 00:10:05.718 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:10:05.718 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:10:05.977 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:10:05.977 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:10:05.977 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:10:05.977 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:10:06.237 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:10:06.497 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:10:06.497 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:10:06.497 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:10:06.755 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:10:06.755 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:10:06.755 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:10:06.755 [126/268] Linking static target lib/librte_pci.a 00:10:07.014 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:10:07.014 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:10:07.014 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:10:07.014 [130/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:07.014 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:10:07.273 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:10:07.273 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:10:07.273 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:10:07.273 [135/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:10:07.273 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:10:07.273 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:10:07.273 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:10:07.273 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:10:07.273 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:10:07.532 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:10:07.532 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:10:07.532 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:10:07.532 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:10:07.532 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:10:07.532 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:10:07.532 [147/268] Linking static target lib/librte_cmdline.a 00:10:07.532 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:10:07.791 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:10:07.791 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:10:08.049 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:10:08.049 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:10:08.049 [153/268] Linking static target lib/librte_timer.a 00:10:08.309 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:10:08.309 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:10:08.309 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:10:08.309 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:10:08.309 [158/268] Linking static target lib/librte_compressdev.a 00:10:08.569 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:10:08.569 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:10:08.569 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:10:08.569 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:10:08.828 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:10:08.828 [164/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:10:08.828 [165/268] Linking static target lib/librte_ethdev.a 00:10:08.828 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:10:08.828 [167/268] Linking static target lib/librte_dmadev.a 00:10:09.087 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:10:09.087 [169/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:10:09.087 [170/268] Linking static target lib/librte_hash.a 00:10:09.087 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:10:09.087 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:10:09.346 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:10:09.346 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:10:09.346 [175/268] Generating 
lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:09.606 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:10:09.606 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:10:09.606 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:10:09.865 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:10:09.865 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:09.865 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:10:09.865 [182/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:10:10.123 [183/268] Linking static target lib/librte_cryptodev.a 00:10:10.123 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:10:10.123 [185/268] Linking static target lib/librte_power.a 00:10:10.382 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:10:10.382 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:10:10.382 [188/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:10:10.382 [189/268] Linking static target lib/librte_reorder.a 00:10:10.382 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:10:10.382 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:10:10.382 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:10:10.382 [193/268] Linking static target lib/librte_security.a 00:10:10.950 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:10:10.950 [195/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:10:10.950 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:10:11.208 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:10:11.208 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:10:11.208 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:10:11.467 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:10:11.467 [201/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:10:11.467 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:10:11.724 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:10:11.724 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:10:11.724 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:10:11.724 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:10:11.981 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:11.981 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:10:11.981 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:10:11.981 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:10:11.981 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:10:12.247 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:10:12.247 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:12.247 [214/268] Compiling C 
object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:10:12.247 [215/268] Linking static target drivers/librte_bus_vdev.a 00:10:12.247 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:10:12.247 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:12.247 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:10:12.247 [219/268] Linking static target drivers/librte_bus_pci.a 00:10:12.247 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:10:12.506 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:10:12.506 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:12.506 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:10:12.506 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:12.506 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:10:12.506 [226/268] Linking static target drivers/librte_mempool_ring.a 00:10:13.074 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:10:14.011 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:10:15.390 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:10:15.390 [230/268] Linking target lib/librte_eal.so.24.1 00:10:15.390 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:10:15.390 [232/268] Linking target lib/librte_meter.so.24.1 00:10:15.390 [233/268] Linking target lib/librte_ring.so.24.1 00:10:15.390 [234/268] Linking target lib/librte_timer.so.24.1 00:10:15.390 [235/268] Linking target lib/librte_dmadev.so.24.1 00:10:15.390 [236/268] Linking target lib/librte_pci.so.24.1 00:10:15.390 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:10:15.648 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:10:15.648 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:10:15.648 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:10:15.648 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:10:15.648 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:10:15.648 [243/268] Linking target lib/librte_mempool.so.24.1 00:10:15.648 [244/268] Linking target lib/librte_rcu.so.24.1 00:10:15.648 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:10:15.907 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:10:15.907 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:10:15.907 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:10:15.907 [249/268] Linking target lib/librte_mbuf.so.24.1 00:10:16.165 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:10:16.165 [251/268] Linking target lib/librte_compressdev.so.24.1 00:10:16.165 [252/268] Linking target lib/librte_reorder.so.24.1 00:10:16.165 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:10:16.165 [254/268] Linking target lib/librte_net.so.24.1 00:10:16.165 [255/268] Generating symbol file 
lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:10:16.425 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:10:16.425 [257/268] Linking target lib/librte_cmdline.so.24.1 00:10:16.425 [258/268] Linking target lib/librte_security.so.24.1 00:10:16.425 [259/268] Linking target lib/librte_hash.so.24.1 00:10:16.425 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:10:17.802 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:10:17.802 [262/268] Linking target lib/librte_ethdev.so.24.1 00:10:18.061 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:10:18.061 [264/268] Linking target lib/librte_power.so.24.1 00:10:18.320 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:10:18.320 [266/268] Linking static target lib/librte_vhost.a 00:10:20.853 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:10:20.853 [268/268] Linking target lib/librte_vhost.so.24.1 00:10:20.853 INFO: autodetecting backend as ninja 00:10:20.853 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:10:21.791 CC lib/log/log_flags.o 00:10:21.791 CC lib/log/log_deprecated.o 00:10:21.791 CC lib/log/log.o 00:10:21.791 CC lib/ut/ut.o 00:10:21.791 CC lib/ut_mock/mock.o 00:10:22.050 LIB libspdk_ut_mock.a 00:10:22.050 SO libspdk_ut_mock.so.6.0 00:10:22.050 LIB libspdk_log.a 00:10:22.050 LIB libspdk_ut.a 00:10:22.050 SO libspdk_log.so.7.0 00:10:22.050 SYMLINK libspdk_ut_mock.so 00:10:22.050 SO libspdk_ut.so.2.0 00:10:22.050 SYMLINK libspdk_log.so 00:10:22.050 SYMLINK libspdk_ut.so 00:10:22.309 CC lib/util/base64.o 00:10:22.309 CC lib/util/cpuset.o 00:10:22.309 CC lib/util/crc16.o 00:10:22.309 CC lib/util/crc32.o 00:10:22.309 CC lib/util/bit_array.o 00:10:22.309 CC lib/util/crc32c.o 00:10:22.309 CC lib/dma/dma.o 00:10:22.309 CXX lib/trace_parser/trace.o 00:10:22.309 CC lib/ioat/ioat.o 00:10:22.567 CC lib/vfio_user/host/vfio_user_pci.o 00:10:22.567 CC lib/util/crc32_ieee.o 00:10:22.567 CC lib/util/crc64.o 00:10:22.567 CC lib/util/dif.o 00:10:22.567 CC lib/util/fd.o 00:10:22.567 CC lib/util/fd_group.o 00:10:22.567 CC lib/util/file.o 00:10:22.567 LIB libspdk_dma.a 00:10:22.567 CC lib/util/hexlify.o 00:10:22.567 CC lib/util/iov.o 00:10:22.567 SO libspdk_dma.so.4.0 00:10:22.825 LIB libspdk_ioat.a 00:10:22.825 CC lib/util/math.o 00:10:22.825 SO libspdk_ioat.so.7.0 00:10:22.825 SYMLINK libspdk_dma.so 00:10:22.825 CC lib/vfio_user/host/vfio_user.o 00:10:22.825 CC lib/util/net.o 00:10:22.825 CC lib/util/pipe.o 00:10:22.825 CC lib/util/strerror_tls.o 00:10:22.825 SYMLINK libspdk_ioat.so 00:10:22.825 CC lib/util/string.o 00:10:22.825 CC lib/util/uuid.o 00:10:22.825 CC lib/util/xor.o 00:10:22.825 CC lib/util/zipf.o 00:10:23.082 LIB libspdk_vfio_user.a 00:10:23.082 SO libspdk_vfio_user.so.5.0 00:10:23.082 SYMLINK libspdk_vfio_user.so 00:10:23.342 LIB libspdk_util.a 00:10:23.342 SO libspdk_util.so.10.0 00:10:23.342 LIB libspdk_trace_parser.a 00:10:23.608 SYMLINK libspdk_util.so 00:10:23.608 SO libspdk_trace_parser.so.5.0 00:10:23.608 SYMLINK libspdk_trace_parser.so 00:10:23.608 CC lib/vmd/vmd.o 00:10:23.608 CC lib/rdma_utils/rdma_utils.o 00:10:23.608 CC lib/vmd/led.o 00:10:23.608 CC lib/env_dpdk/env.o 00:10:23.608 CC lib/env_dpdk/pci.o 00:10:23.608 CC lib/env_dpdk/memory.o 00:10:23.608 CC lib/idxd/idxd.o 00:10:23.608 CC 
lib/json/json_parse.o 00:10:23.608 CC lib/conf/conf.o 00:10:23.608 CC lib/rdma_provider/common.o 00:10:23.867 CC lib/env_dpdk/init.o 00:10:23.867 CC lib/rdma_provider/rdma_provider_verbs.o 00:10:23.867 LIB libspdk_conf.a 00:10:23.868 SO libspdk_conf.so.6.0 00:10:23.868 CC lib/json/json_util.o 00:10:24.126 LIB libspdk_rdma_utils.a 00:10:24.126 SO libspdk_rdma_utils.so.1.0 00:10:24.126 SYMLINK libspdk_conf.so 00:10:24.126 CC lib/idxd/idxd_user.o 00:10:24.126 CC lib/env_dpdk/threads.o 00:10:24.126 LIB libspdk_rdma_provider.a 00:10:24.126 SYMLINK libspdk_rdma_utils.so 00:10:24.126 CC lib/env_dpdk/pci_ioat.o 00:10:24.126 SO libspdk_rdma_provider.so.6.0 00:10:24.127 SYMLINK libspdk_rdma_provider.so 00:10:24.127 CC lib/json/json_write.o 00:10:24.127 CC lib/env_dpdk/pci_virtio.o 00:10:24.127 CC lib/env_dpdk/pci_vmd.o 00:10:24.127 CC lib/idxd/idxd_kernel.o 00:10:24.127 CC lib/env_dpdk/pci_idxd.o 00:10:24.386 CC lib/env_dpdk/pci_event.o 00:10:24.386 CC lib/env_dpdk/sigbus_handler.o 00:10:24.386 CC lib/env_dpdk/pci_dpdk.o 00:10:24.386 CC lib/env_dpdk/pci_dpdk_2207.o 00:10:24.386 CC lib/env_dpdk/pci_dpdk_2211.o 00:10:24.386 LIB libspdk_idxd.a 00:10:24.386 SO libspdk_idxd.so.12.0 00:10:24.386 LIB libspdk_vmd.a 00:10:24.386 SO libspdk_vmd.so.6.0 00:10:24.645 LIB libspdk_json.a 00:10:24.645 SYMLINK libspdk_idxd.so 00:10:24.645 SO libspdk_json.so.6.0 00:10:24.645 SYMLINK libspdk_vmd.so 00:10:24.645 SYMLINK libspdk_json.so 00:10:24.904 CC lib/jsonrpc/jsonrpc_server.o 00:10:24.904 CC lib/jsonrpc/jsonrpc_client.o 00:10:24.904 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:10:24.904 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:10:25.473 LIB libspdk_jsonrpc.a 00:10:25.473 SO libspdk_jsonrpc.so.6.0 00:10:25.473 SYMLINK libspdk_jsonrpc.so 00:10:25.473 LIB libspdk_env_dpdk.a 00:10:25.733 SO libspdk_env_dpdk.so.15.0 00:10:25.733 SYMLINK libspdk_env_dpdk.so 00:10:25.991 CC lib/rpc/rpc.o 00:10:26.250 LIB libspdk_rpc.a 00:10:26.250 SO libspdk_rpc.so.6.0 00:10:26.250 SYMLINK libspdk_rpc.so 00:10:26.508 CC lib/keyring/keyring.o 00:10:26.508 CC lib/keyring/keyring_rpc.o 00:10:26.508 CC lib/trace/trace.o 00:10:26.508 CC lib/trace/trace_rpc.o 00:10:26.508 CC lib/trace/trace_flags.o 00:10:26.508 CC lib/notify/notify_rpc.o 00:10:26.508 CC lib/notify/notify.o 00:10:26.768 LIB libspdk_notify.a 00:10:26.768 LIB libspdk_keyring.a 00:10:26.768 SO libspdk_notify.so.6.0 00:10:26.768 SO libspdk_keyring.so.1.0 00:10:26.768 SYMLINK libspdk_notify.so 00:10:26.768 LIB libspdk_trace.a 00:10:26.768 SYMLINK libspdk_keyring.so 00:10:27.027 SO libspdk_trace.so.10.0 00:10:27.027 SYMLINK libspdk_trace.so 00:10:27.286 CC lib/sock/sock.o 00:10:27.286 CC lib/sock/sock_rpc.o 00:10:27.286 CC lib/thread/thread.o 00:10:27.286 CC lib/thread/iobuf.o 00:10:27.853 LIB libspdk_sock.a 00:10:27.853 SO libspdk_sock.so.10.0 00:10:28.111 SYMLINK libspdk_sock.so 00:10:28.370 CC lib/nvme/nvme_fabric.o 00:10:28.370 CC lib/nvme/nvme_ctrlr_cmd.o 00:10:28.370 CC lib/nvme/nvme_ctrlr.o 00:10:28.370 CC lib/nvme/nvme_ns_cmd.o 00:10:28.370 CC lib/nvme/nvme_ns.o 00:10:28.370 CC lib/nvme/nvme_pcie_common.o 00:10:28.370 CC lib/nvme/nvme_pcie.o 00:10:28.370 CC lib/nvme/nvme_qpair.o 00:10:28.370 CC lib/nvme/nvme.o 00:10:28.938 CC lib/nvme/nvme_quirks.o 00:10:29.197 CC lib/nvme/nvme_transport.o 00:10:29.197 LIB libspdk_thread.a 00:10:29.197 CC lib/nvme/nvme_discovery.o 00:10:29.197 SO libspdk_thread.so.10.1 00:10:29.197 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:10:29.197 SYMLINK libspdk_thread.so 00:10:29.197 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:10:29.197 CC lib/nvme/nvme_tcp.o 00:10:29.455 CC 
lib/nvme/nvme_opal.o 00:10:29.455 CC lib/nvme/nvme_io_msg.o 00:10:29.455 CC lib/nvme/nvme_poll_group.o 00:10:29.724 CC lib/nvme/nvme_zns.o 00:10:29.724 CC lib/nvme/nvme_stubs.o 00:10:29.724 CC lib/nvme/nvme_auth.o 00:10:29.982 CC lib/nvme/nvme_cuse.o 00:10:29.982 CC lib/nvme/nvme_rdma.o 00:10:30.240 CC lib/blob/blobstore.o 00:10:30.240 CC lib/accel/accel.o 00:10:30.240 CC lib/accel/accel_rpc.o 00:10:30.240 CC lib/init/json_config.o 00:10:30.240 CC lib/init/subsystem.o 00:10:30.498 CC lib/init/subsystem_rpc.o 00:10:30.498 CC lib/blob/request.o 00:10:30.498 CC lib/blob/zeroes.o 00:10:30.756 CC lib/init/rpc.o 00:10:30.756 CC lib/blob/blob_bs_dev.o 00:10:30.756 CC lib/accel/accel_sw.o 00:10:31.014 LIB libspdk_init.a 00:10:31.014 SO libspdk_init.so.5.0 00:10:31.014 CC lib/virtio/virtio.o 00:10:31.014 CC lib/virtio/virtio_vhost_user.o 00:10:31.014 CC lib/virtio/virtio_vfio_user.o 00:10:31.014 SYMLINK libspdk_init.so 00:10:31.014 CC lib/virtio/virtio_pci.o 00:10:31.273 CC lib/event/reactor.o 00:10:31.273 CC lib/event/log_rpc.o 00:10:31.273 CC lib/event/app.o 00:10:31.273 CC lib/event/app_rpc.o 00:10:31.273 CC lib/event/scheduler_static.o 00:10:31.273 LIB libspdk_virtio.a 00:10:31.273 LIB libspdk_accel.a 00:10:31.530 SO libspdk_virtio.so.7.0 00:10:31.530 SO libspdk_accel.so.16.0 00:10:31.530 SYMLINK libspdk_virtio.so 00:10:31.530 SYMLINK libspdk_accel.so 00:10:31.530 LIB libspdk_nvme.a 00:10:31.787 SO libspdk_nvme.so.13.1 00:10:31.787 LIB libspdk_event.a 00:10:31.787 CC lib/bdev/bdev_rpc.o 00:10:31.787 CC lib/bdev/bdev.o 00:10:31.787 CC lib/bdev/bdev_zone.o 00:10:31.787 CC lib/bdev/part.o 00:10:31.787 CC lib/bdev/scsi_nvme.o 00:10:31.787 SO libspdk_event.so.14.0 00:10:32.046 SYMLINK libspdk_event.so 00:10:32.046 SYMLINK libspdk_nvme.so 00:10:33.952 LIB libspdk_blob.a 00:10:34.212 SO libspdk_blob.so.11.0 00:10:34.212 SYMLINK libspdk_blob.so 00:10:34.780 CC lib/blobfs/blobfs.o 00:10:34.780 CC lib/blobfs/tree.o 00:10:34.780 CC lib/lvol/lvol.o 00:10:35.039 LIB libspdk_bdev.a 00:10:35.039 SO libspdk_bdev.so.16.0 00:10:35.297 SYMLINK libspdk_bdev.so 00:10:35.556 CC lib/nbd/nbd.o 00:10:35.556 CC lib/nbd/nbd_rpc.o 00:10:35.556 CC lib/ftl/ftl_core.o 00:10:35.556 CC lib/ftl/ftl_init.o 00:10:35.556 CC lib/ftl/ftl_layout.o 00:10:35.556 CC lib/ublk/ublk.o 00:10:35.556 CC lib/nvmf/ctrlr.o 00:10:35.556 CC lib/scsi/dev.o 00:10:35.556 CC lib/scsi/lun.o 00:10:35.556 LIB libspdk_blobfs.a 00:10:35.815 CC lib/ftl/ftl_debug.o 00:10:35.815 SO libspdk_blobfs.so.10.0 00:10:35.815 LIB libspdk_lvol.a 00:10:35.815 SYMLINK libspdk_blobfs.so 00:10:35.815 SO libspdk_lvol.so.10.0 00:10:35.815 CC lib/scsi/port.o 00:10:35.815 CC lib/ftl/ftl_io.o 00:10:35.815 SYMLINK libspdk_lvol.so 00:10:35.815 CC lib/ftl/ftl_sb.o 00:10:35.815 CC lib/ftl/ftl_l2p.o 00:10:35.815 LIB libspdk_nbd.a 00:10:35.815 CC lib/ftl/ftl_l2p_flat.o 00:10:35.815 CC lib/scsi/scsi.o 00:10:35.815 CC lib/scsi/scsi_bdev.o 00:10:35.815 SO libspdk_nbd.so.7.0 00:10:36.073 CC lib/scsi/scsi_pr.o 00:10:36.074 SYMLINK libspdk_nbd.so 00:10:36.074 CC lib/ublk/ublk_rpc.o 00:10:36.074 CC lib/ftl/ftl_nv_cache.o 00:10:36.074 CC lib/nvmf/ctrlr_discovery.o 00:10:36.074 CC lib/ftl/ftl_band.o 00:10:36.074 CC lib/scsi/scsi_rpc.o 00:10:36.074 CC lib/scsi/task.o 00:10:36.074 CC lib/nvmf/ctrlr_bdev.o 00:10:36.333 LIB libspdk_ublk.a 00:10:36.333 CC lib/nvmf/subsystem.o 00:10:36.333 SO libspdk_ublk.so.3.0 00:10:36.333 SYMLINK libspdk_ublk.so 00:10:36.333 CC lib/ftl/ftl_band_ops.o 00:10:36.333 CC lib/ftl/ftl_writer.o 00:10:36.333 CC lib/ftl/ftl_rq.o 00:10:36.591 CC lib/nvmf/nvmf.o 
00:10:36.591 LIB libspdk_scsi.a 00:10:36.591 CC lib/ftl/ftl_reloc.o 00:10:36.591 SO libspdk_scsi.so.9.0 00:10:36.591 CC lib/nvmf/nvmf_rpc.o 00:10:36.591 CC lib/nvmf/transport.o 00:10:36.591 CC lib/nvmf/tcp.o 00:10:36.591 SYMLINK libspdk_scsi.so 00:10:36.591 CC lib/nvmf/stubs.o 00:10:36.849 CC lib/nvmf/mdns_server.o 00:10:37.107 CC lib/nvmf/rdma.o 00:10:37.107 CC lib/nvmf/auth.o 00:10:37.365 CC lib/ftl/ftl_l2p_cache.o 00:10:37.365 CC lib/ftl/ftl_p2l.o 00:10:37.623 CC lib/iscsi/conn.o 00:10:37.623 CC lib/iscsi/init_grp.o 00:10:37.623 CC lib/ftl/mngt/ftl_mngt.o 00:10:37.623 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:10:37.623 CC lib/vhost/vhost.o 00:10:37.881 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:10:37.881 CC lib/iscsi/iscsi.o 00:10:37.881 CC lib/iscsi/md5.o 00:10:37.881 CC lib/ftl/mngt/ftl_mngt_startup.o 00:10:37.881 CC lib/vhost/vhost_rpc.o 00:10:38.139 CC lib/vhost/vhost_scsi.o 00:10:38.139 CC lib/iscsi/param.o 00:10:38.139 CC lib/ftl/mngt/ftl_mngt_md.o 00:10:38.139 CC lib/ftl/mngt/ftl_mngt_misc.o 00:10:38.397 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:10:38.397 CC lib/vhost/vhost_blk.o 00:10:38.397 CC lib/vhost/rte_vhost_user.o 00:10:38.397 CC lib/iscsi/portal_grp.o 00:10:38.397 CC lib/iscsi/tgt_node.o 00:10:38.655 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:10:38.655 CC lib/iscsi/iscsi_subsystem.o 00:10:38.655 CC lib/iscsi/iscsi_rpc.o 00:10:38.655 CC lib/ftl/mngt/ftl_mngt_band.o 00:10:38.655 CC lib/iscsi/task.o 00:10:38.914 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:10:38.914 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:10:39.173 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:10:39.173 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:10:39.173 CC lib/ftl/utils/ftl_conf.o 00:10:39.173 CC lib/ftl/utils/ftl_md.o 00:10:39.173 CC lib/ftl/utils/ftl_mempool.o 00:10:39.173 CC lib/ftl/utils/ftl_bitmap.o 00:10:39.173 CC lib/ftl/utils/ftl_property.o 00:10:39.432 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:10:39.432 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:10:39.432 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:10:39.432 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:10:39.432 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:10:39.691 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:10:39.691 LIB libspdk_vhost.a 00:10:39.691 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:10:39.691 CC lib/ftl/upgrade/ftl_sb_v3.o 00:10:39.691 CC lib/ftl/upgrade/ftl_sb_v5.o 00:10:39.691 CC lib/ftl/nvc/ftl_nvc_dev.o 00:10:39.691 SO libspdk_vhost.so.8.0 00:10:39.691 LIB libspdk_iscsi.a 00:10:39.691 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:10:39.691 LIB libspdk_nvmf.a 00:10:39.691 CC lib/ftl/base/ftl_base_dev.o 00:10:39.691 CC lib/ftl/base/ftl_base_bdev.o 00:10:39.691 SYMLINK libspdk_vhost.so 00:10:39.949 CC lib/ftl/ftl_trace.o 00:10:39.949 SO libspdk_iscsi.so.8.0 00:10:39.949 SO libspdk_nvmf.so.19.0 00:10:39.949 SYMLINK libspdk_iscsi.so 00:10:40.207 LIB libspdk_ftl.a 00:10:40.207 SYMLINK libspdk_nvmf.so 00:10:40.207 SO libspdk_ftl.so.9.0 00:10:40.773 SYMLINK libspdk_ftl.so 00:10:41.032 CC module/env_dpdk/env_dpdk_rpc.o 00:10:41.032 CC module/blob/bdev/blob_bdev.o 00:10:41.032 CC module/accel/iaa/accel_iaa.o 00:10:41.032 CC module/accel/error/accel_error.o 00:10:41.032 CC module/accel/dsa/accel_dsa.o 00:10:41.032 CC module/keyring/file/keyring.o 00:10:41.032 CC module/scheduler/dynamic/scheduler_dynamic.o 00:10:41.032 CC module/accel/ioat/accel_ioat.o 00:10:41.032 CC module/keyring/linux/keyring.o 00:10:41.032 CC module/sock/posix/posix.o 00:10:41.032 LIB libspdk_env_dpdk_rpc.a 00:10:41.292 SO libspdk_env_dpdk_rpc.so.6.0 00:10:41.292 SYMLINK libspdk_env_dpdk_rpc.so 00:10:41.292 CC 
module/keyring/linux/keyring_rpc.o 00:10:41.292 CC module/keyring/file/keyring_rpc.o 00:10:41.292 CC module/accel/error/accel_error_rpc.o 00:10:41.292 CC module/accel/ioat/accel_ioat_rpc.o 00:10:41.292 CC module/accel/dsa/accel_dsa_rpc.o 00:10:41.292 CC module/accel/iaa/accel_iaa_rpc.o 00:10:41.292 LIB libspdk_scheduler_dynamic.a 00:10:41.292 SO libspdk_scheduler_dynamic.so.4.0 00:10:41.292 LIB libspdk_keyring_linux.a 00:10:41.292 LIB libspdk_keyring_file.a 00:10:41.292 LIB libspdk_accel_error.a 00:10:41.292 SYMLINK libspdk_scheduler_dynamic.so 00:10:41.292 SO libspdk_keyring_linux.so.1.0 00:10:41.550 LIB libspdk_accel_ioat.a 00:10:41.550 SO libspdk_keyring_file.so.1.0 00:10:41.550 SO libspdk_accel_error.so.2.0 00:10:41.550 LIB libspdk_accel_iaa.a 00:10:41.550 LIB libspdk_blob_bdev.a 00:10:41.550 SO libspdk_accel_ioat.so.6.0 00:10:41.551 LIB libspdk_accel_dsa.a 00:10:41.551 SYMLINK libspdk_keyring_linux.so 00:10:41.551 SO libspdk_accel_iaa.so.3.0 00:10:41.551 SO libspdk_blob_bdev.so.11.0 00:10:41.551 SYMLINK libspdk_accel_error.so 00:10:41.551 SYMLINK libspdk_keyring_file.so 00:10:41.551 SO libspdk_accel_dsa.so.5.0 00:10:41.551 SYMLINK libspdk_accel_ioat.so 00:10:41.551 SYMLINK libspdk_blob_bdev.so 00:10:41.551 SYMLINK libspdk_accel_iaa.so 00:10:41.551 SYMLINK libspdk_accel_dsa.so 00:10:41.551 CC module/scheduler/gscheduler/gscheduler.o 00:10:41.551 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:10:41.808 LIB libspdk_scheduler_gscheduler.a 00:10:41.809 SO libspdk_scheduler_gscheduler.so.4.0 00:10:41.809 LIB libspdk_scheduler_dpdk_governor.a 00:10:41.809 CC module/bdev/delay/vbdev_delay.o 00:10:41.809 CC module/bdev/error/vbdev_error.o 00:10:41.809 CC module/bdev/null/bdev_null.o 00:10:41.809 CC module/blobfs/bdev/blobfs_bdev.o 00:10:41.809 SO libspdk_scheduler_dpdk_governor.so.4.0 00:10:41.809 CC module/bdev/gpt/gpt.o 00:10:41.809 SYMLINK libspdk_scheduler_gscheduler.so 00:10:41.809 CC module/bdev/gpt/vbdev_gpt.o 00:10:41.809 CC module/bdev/malloc/bdev_malloc.o 00:10:41.809 CC module/bdev/lvol/vbdev_lvol.o 00:10:42.067 SYMLINK libspdk_scheduler_dpdk_governor.so 00:10:42.067 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:10:42.067 LIB libspdk_sock_posix.a 00:10:42.067 SO libspdk_sock_posix.so.6.0 00:10:42.067 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:10:42.067 CC module/bdev/error/vbdev_error_rpc.o 00:10:42.067 SYMLINK libspdk_sock_posix.so 00:10:42.067 CC module/bdev/delay/vbdev_delay_rpc.o 00:10:42.067 CC module/bdev/null/bdev_null_rpc.o 00:10:42.067 LIB libspdk_bdev_gpt.a 00:10:42.326 SO libspdk_bdev_gpt.so.6.0 00:10:42.326 LIB libspdk_blobfs_bdev.a 00:10:42.326 LIB libspdk_bdev_error.a 00:10:42.326 SO libspdk_blobfs_bdev.so.6.0 00:10:42.326 SO libspdk_bdev_error.so.6.0 00:10:42.326 SYMLINK libspdk_bdev_gpt.so 00:10:42.326 CC module/bdev/malloc/bdev_malloc_rpc.o 00:10:42.326 LIB libspdk_bdev_delay.a 00:10:42.326 CC module/bdev/nvme/bdev_nvme.o 00:10:42.326 SYMLINK libspdk_bdev_error.so 00:10:42.326 SO libspdk_bdev_delay.so.6.0 00:10:42.326 LIB libspdk_bdev_null.a 00:10:42.326 SYMLINK libspdk_blobfs_bdev.so 00:10:42.326 SO libspdk_bdev_null.so.6.0 00:10:42.326 SYMLINK libspdk_bdev_delay.so 00:10:42.326 CC module/bdev/nvme/bdev_nvme_rpc.o 00:10:42.585 SYMLINK libspdk_bdev_null.so 00:10:42.585 CC module/bdev/passthru/vbdev_passthru.o 00:10:42.585 LIB libspdk_bdev_malloc.a 00:10:42.585 LIB libspdk_bdev_lvol.a 00:10:42.585 CC module/bdev/raid/bdev_raid.o 00:10:42.585 CC module/bdev/zone_block/vbdev_zone_block.o 00:10:42.585 CC module/bdev/split/vbdev_split.o 00:10:42.585 SO 
libspdk_bdev_malloc.so.6.0 00:10:42.585 SO libspdk_bdev_lvol.so.6.0 00:10:42.585 CC module/bdev/xnvme/bdev_xnvme.o 00:10:42.585 SYMLINK libspdk_bdev_malloc.so 00:10:42.585 CC module/bdev/nvme/nvme_rpc.o 00:10:42.585 SYMLINK libspdk_bdev_lvol.so 00:10:42.585 CC module/bdev/aio/bdev_aio.o 00:10:42.585 CC module/bdev/nvme/bdev_mdns_client.o 00:10:42.858 CC module/bdev/split/vbdev_split_rpc.o 00:10:42.858 CC module/bdev/raid/bdev_raid_rpc.o 00:10:42.858 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:10:42.858 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:10:42.858 CC module/bdev/nvme/vbdev_opal.o 00:10:42.858 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:10:42.858 LIB libspdk_bdev_split.a 00:10:43.177 SO libspdk_bdev_split.so.6.0 00:10:43.177 CC module/bdev/aio/bdev_aio_rpc.o 00:10:43.177 LIB libspdk_bdev_xnvme.a 00:10:43.177 LIB libspdk_bdev_passthru.a 00:10:43.177 SO libspdk_bdev_passthru.so.6.0 00:10:43.177 SO libspdk_bdev_xnvme.so.3.0 00:10:43.177 SYMLINK libspdk_bdev_split.so 00:10:43.177 LIB libspdk_bdev_zone_block.a 00:10:43.177 SYMLINK libspdk_bdev_xnvme.so 00:10:43.177 SYMLINK libspdk_bdev_passthru.so 00:10:43.177 CC module/bdev/raid/bdev_raid_sb.o 00:10:43.177 SO libspdk_bdev_zone_block.so.6.0 00:10:43.177 LIB libspdk_bdev_aio.a 00:10:43.177 CC module/bdev/raid/raid0.o 00:10:43.177 SO libspdk_bdev_aio.so.6.0 00:10:43.177 CC module/bdev/nvme/vbdev_opal_rpc.o 00:10:43.177 SYMLINK libspdk_bdev_zone_block.so 00:10:43.177 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:10:43.177 CC module/bdev/iscsi/bdev_iscsi.o 00:10:43.177 SYMLINK libspdk_bdev_aio.so 00:10:43.457 CC module/bdev/ftl/bdev_ftl.o 00:10:43.457 CC module/bdev/raid/raid1.o 00:10:43.457 CC module/bdev/virtio/bdev_virtio_scsi.o 00:10:43.457 CC module/bdev/raid/concat.o 00:10:43.457 CC module/bdev/virtio/bdev_virtio_blk.o 00:10:43.457 CC module/bdev/virtio/bdev_virtio_rpc.o 00:10:43.457 CC module/bdev/ftl/bdev_ftl_rpc.o 00:10:43.717 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:10:43.717 LIB libspdk_bdev_ftl.a 00:10:43.717 LIB libspdk_bdev_raid.a 00:10:43.717 SO libspdk_bdev_ftl.so.6.0 00:10:43.717 SO libspdk_bdev_raid.so.6.0 00:10:43.717 LIB libspdk_bdev_iscsi.a 00:10:43.975 SYMLINK libspdk_bdev_ftl.so 00:10:43.975 SO libspdk_bdev_iscsi.so.6.0 00:10:43.975 SYMLINK libspdk_bdev_raid.so 00:10:43.975 SYMLINK libspdk_bdev_iscsi.so 00:10:43.975 LIB libspdk_bdev_virtio.a 00:10:43.975 SO libspdk_bdev_virtio.so.6.0 00:10:44.235 SYMLINK libspdk_bdev_virtio.so 00:10:45.173 LIB libspdk_bdev_nvme.a 00:10:45.173 SO libspdk_bdev_nvme.so.7.0 00:10:45.173 SYMLINK libspdk_bdev_nvme.so 00:10:45.742 CC module/event/subsystems/sock/sock.o 00:10:45.742 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:10:45.742 CC module/event/subsystems/scheduler/scheduler.o 00:10:45.742 CC module/event/subsystems/iobuf/iobuf.o 00:10:45.742 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:10:45.742 CC module/event/subsystems/keyring/keyring.o 00:10:46.001 CC module/event/subsystems/vmd/vmd.o 00:10:46.001 CC module/event/subsystems/vmd/vmd_rpc.o 00:10:46.001 LIB libspdk_event_vhost_blk.a 00:10:46.001 LIB libspdk_event_sock.a 00:10:46.001 LIB libspdk_event_scheduler.a 00:10:46.001 LIB libspdk_event_keyring.a 00:10:46.001 SO libspdk_event_vhost_blk.so.3.0 00:10:46.001 SO libspdk_event_sock.so.5.0 00:10:46.001 SO libspdk_event_scheduler.so.4.0 00:10:46.001 LIB libspdk_event_vmd.a 00:10:46.001 LIB libspdk_event_iobuf.a 00:10:46.001 SO libspdk_event_keyring.so.1.0 00:10:46.001 SO libspdk_event_vmd.so.6.0 00:10:46.001 SYMLINK libspdk_event_vhost_blk.so 00:10:46.001 SO 
libspdk_event_iobuf.so.3.0 00:10:46.001 SYMLINK libspdk_event_scheduler.so 00:10:46.001 SYMLINK libspdk_event_sock.so 00:10:46.001 SYMLINK libspdk_event_keyring.so 00:10:46.259 SYMLINK libspdk_event_vmd.so 00:10:46.259 SYMLINK libspdk_event_iobuf.so 00:10:46.518 CC module/event/subsystems/accel/accel.o 00:10:46.518 LIB libspdk_event_accel.a 00:10:46.778 SO libspdk_event_accel.so.6.0 00:10:46.778 SYMLINK libspdk_event_accel.so 00:10:47.037 CC module/event/subsystems/bdev/bdev.o 00:10:47.295 LIB libspdk_event_bdev.a 00:10:47.296 SO libspdk_event_bdev.so.6.0 00:10:47.296 SYMLINK libspdk_event_bdev.so 00:10:47.553 CC module/event/subsystems/nbd/nbd.o 00:10:47.553 CC module/event/subsystems/ublk/ublk.o 00:10:47.812 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:10:47.812 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:10:47.812 CC module/event/subsystems/scsi/scsi.o 00:10:47.812 LIB libspdk_event_nbd.a 00:10:47.812 LIB libspdk_event_ublk.a 00:10:47.812 SO libspdk_event_nbd.so.6.0 00:10:47.812 SO libspdk_event_ublk.so.3.0 00:10:47.812 LIB libspdk_event_scsi.a 00:10:47.812 SYMLINK libspdk_event_nbd.so 00:10:47.812 SYMLINK libspdk_event_ublk.so 00:10:47.812 SO libspdk_event_scsi.so.6.0 00:10:48.070 LIB libspdk_event_nvmf.a 00:10:48.070 SYMLINK libspdk_event_scsi.so 00:10:48.070 SO libspdk_event_nvmf.so.6.0 00:10:48.070 SYMLINK libspdk_event_nvmf.so 00:10:48.328 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:10:48.328 CC module/event/subsystems/iscsi/iscsi.o 00:10:48.587 LIB libspdk_event_vhost_scsi.a 00:10:48.587 SO libspdk_event_vhost_scsi.so.3.0 00:10:48.587 LIB libspdk_event_iscsi.a 00:10:48.587 SO libspdk_event_iscsi.so.6.0 00:10:48.587 SYMLINK libspdk_event_vhost_scsi.so 00:10:48.587 SYMLINK libspdk_event_iscsi.so 00:10:48.846 SO libspdk.so.6.0 00:10:48.846 SYMLINK libspdk.so 00:10:49.104 CC app/trace_record/trace_record.o 00:10:49.104 CXX app/trace/trace.o 00:10:49.104 CC app/spdk_lspci/spdk_lspci.o 00:10:49.104 CC app/spdk_nvme_identify/identify.o 00:10:49.104 CC app/spdk_nvme_perf/perf.o 00:10:49.104 CC app/nvmf_tgt/nvmf_main.o 00:10:49.104 CC app/iscsi_tgt/iscsi_tgt.o 00:10:49.104 CC examples/util/zipf/zipf.o 00:10:49.104 CC test/thread/poller_perf/poller_perf.o 00:10:49.104 CC app/spdk_tgt/spdk_tgt.o 00:10:49.104 LINK spdk_lspci 00:10:49.362 LINK nvmf_tgt 00:10:49.362 LINK poller_perf 00:10:49.362 LINK zipf 00:10:49.362 LINK iscsi_tgt 00:10:49.362 LINK spdk_trace_record 00:10:49.362 LINK spdk_tgt 00:10:49.619 CC app/spdk_nvme_discover/discovery_aer.o 00:10:49.620 LINK spdk_trace 00:10:49.620 CC app/spdk_top/spdk_top.o 00:10:49.620 CC examples/ioat/perf/perf.o 00:10:49.620 CC test/dma/test_dma/test_dma.o 00:10:49.878 LINK spdk_nvme_discover 00:10:49.878 CC app/spdk_dd/spdk_dd.o 00:10:49.878 CC examples/vmd/lsvmd/lsvmd.o 00:10:49.878 CC examples/vmd/led/led.o 00:10:49.878 CC app/fio/nvme/fio_plugin.o 00:10:49.878 LINK ioat_perf 00:10:49.878 LINK lsvmd 00:10:50.136 LINK led 00:10:50.136 CC app/vhost/vhost.o 00:10:50.136 LINK spdk_nvme_identify 00:10:50.136 CC examples/ioat/verify/verify.o 00:10:50.136 LINK test_dma 00:10:50.136 LINK spdk_dd 00:10:50.136 LINK spdk_nvme_perf 00:10:50.394 CC app/fio/bdev/fio_plugin.o 00:10:50.394 LINK vhost 00:10:50.394 CC examples/idxd/perf/perf.o 00:10:50.394 LINK verify 00:10:50.394 CC examples/interrupt_tgt/interrupt_tgt.o 00:10:50.652 LINK spdk_nvme 00:10:50.652 CC examples/thread/thread/thread_ex.o 00:10:50.652 CC test/blobfs/mkfs/mkfs.o 00:10:50.652 CC test/app/bdev_svc/bdev_svc.o 00:10:50.652 LINK interrupt_tgt 00:10:50.652 TEST_HEADER 
include/spdk/accel.h 00:10:50.652 TEST_HEADER include/spdk/accel_module.h 00:10:50.652 TEST_HEADER include/spdk/assert.h 00:10:50.652 TEST_HEADER include/spdk/barrier.h 00:10:50.652 TEST_HEADER include/spdk/base64.h 00:10:50.652 TEST_HEADER include/spdk/bdev.h 00:10:50.652 TEST_HEADER include/spdk/bdev_module.h 00:10:50.652 TEST_HEADER include/spdk/bdev_zone.h 00:10:50.652 LINK spdk_top 00:10:50.652 TEST_HEADER include/spdk/bit_array.h 00:10:50.652 TEST_HEADER include/spdk/bit_pool.h 00:10:50.652 TEST_HEADER include/spdk/blob_bdev.h 00:10:50.652 TEST_HEADER include/spdk/blobfs_bdev.h 00:10:50.652 TEST_HEADER include/spdk/blobfs.h 00:10:50.652 TEST_HEADER include/spdk/blob.h 00:10:50.652 TEST_HEADER include/spdk/conf.h 00:10:50.652 TEST_HEADER include/spdk/config.h 00:10:50.652 TEST_HEADER include/spdk/cpuset.h 00:10:50.652 TEST_HEADER include/spdk/crc16.h 00:10:50.652 TEST_HEADER include/spdk/crc32.h 00:10:50.652 TEST_HEADER include/spdk/crc64.h 00:10:50.652 TEST_HEADER include/spdk/dif.h 00:10:50.652 TEST_HEADER include/spdk/dma.h 00:10:50.652 TEST_HEADER include/spdk/endian.h 00:10:50.652 TEST_HEADER include/spdk/env_dpdk.h 00:10:50.652 TEST_HEADER include/spdk/env.h 00:10:50.652 LINK idxd_perf 00:10:50.652 CC examples/sock/hello_world/hello_sock.o 00:10:50.652 TEST_HEADER include/spdk/event.h 00:10:50.652 TEST_HEADER include/spdk/fd_group.h 00:10:50.652 TEST_HEADER include/spdk/fd.h 00:10:50.652 TEST_HEADER include/spdk/file.h 00:10:50.652 TEST_HEADER include/spdk/ftl.h 00:10:50.652 TEST_HEADER include/spdk/gpt_spec.h 00:10:50.652 TEST_HEADER include/spdk/hexlify.h 00:10:50.652 TEST_HEADER include/spdk/histogram_data.h 00:10:50.652 TEST_HEADER include/spdk/idxd.h 00:10:50.652 TEST_HEADER include/spdk/idxd_spec.h 00:10:50.652 TEST_HEADER include/spdk/init.h 00:10:50.652 TEST_HEADER include/spdk/ioat.h 00:10:50.652 TEST_HEADER include/spdk/ioat_spec.h 00:10:50.652 TEST_HEADER include/spdk/iscsi_spec.h 00:10:50.652 TEST_HEADER include/spdk/json.h 00:10:50.652 TEST_HEADER include/spdk/jsonrpc.h 00:10:50.652 TEST_HEADER include/spdk/keyring.h 00:10:50.652 TEST_HEADER include/spdk/keyring_module.h 00:10:50.652 TEST_HEADER include/spdk/likely.h 00:10:50.652 TEST_HEADER include/spdk/log.h 00:10:50.652 TEST_HEADER include/spdk/lvol.h 00:10:50.911 TEST_HEADER include/spdk/memory.h 00:10:50.911 TEST_HEADER include/spdk/mmio.h 00:10:50.911 TEST_HEADER include/spdk/nbd.h 00:10:50.911 TEST_HEADER include/spdk/net.h 00:10:50.911 TEST_HEADER include/spdk/notify.h 00:10:50.911 TEST_HEADER include/spdk/nvme.h 00:10:50.911 TEST_HEADER include/spdk/nvme_intel.h 00:10:50.911 TEST_HEADER include/spdk/nvme_ocssd.h 00:10:50.911 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:10:50.911 TEST_HEADER include/spdk/nvme_spec.h 00:10:50.911 TEST_HEADER include/spdk/nvme_zns.h 00:10:50.911 TEST_HEADER include/spdk/nvmf_cmd.h 00:10:50.911 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:10:50.911 TEST_HEADER include/spdk/nvmf.h 00:10:50.911 TEST_HEADER include/spdk/nvmf_spec.h 00:10:50.911 TEST_HEADER include/spdk/nvmf_transport.h 00:10:50.911 TEST_HEADER include/spdk/opal.h 00:10:50.911 TEST_HEADER include/spdk/opal_spec.h 00:10:50.911 TEST_HEADER include/spdk/pci_ids.h 00:10:50.911 TEST_HEADER include/spdk/pipe.h 00:10:50.911 TEST_HEADER include/spdk/queue.h 00:10:50.911 TEST_HEADER include/spdk/reduce.h 00:10:50.911 TEST_HEADER include/spdk/rpc.h 00:10:50.911 TEST_HEADER include/spdk/scheduler.h 00:10:50.911 TEST_HEADER include/spdk/scsi.h 00:10:50.911 TEST_HEADER include/spdk/scsi_spec.h 00:10:50.911 TEST_HEADER 
include/spdk/sock.h 00:10:50.911 TEST_HEADER include/spdk/stdinc.h 00:10:50.911 TEST_HEADER include/spdk/string.h 00:10:50.911 TEST_HEADER include/spdk/thread.h 00:10:50.911 TEST_HEADER include/spdk/trace.h 00:10:50.911 LINK bdev_svc 00:10:50.911 TEST_HEADER include/spdk/trace_parser.h 00:10:50.911 LINK spdk_bdev 00:10:50.911 TEST_HEADER include/spdk/tree.h 00:10:50.911 TEST_HEADER include/spdk/ublk.h 00:10:50.911 LINK mkfs 00:10:50.911 TEST_HEADER include/spdk/util.h 00:10:50.911 TEST_HEADER include/spdk/uuid.h 00:10:50.911 TEST_HEADER include/spdk/version.h 00:10:50.911 TEST_HEADER include/spdk/vfio_user_pci.h 00:10:50.911 TEST_HEADER include/spdk/vfio_user_spec.h 00:10:50.911 TEST_HEADER include/spdk/vhost.h 00:10:50.911 TEST_HEADER include/spdk/vmd.h 00:10:50.911 TEST_HEADER include/spdk/xor.h 00:10:50.911 TEST_HEADER include/spdk/zipf.h 00:10:50.911 CXX test/cpp_headers/accel.o 00:10:50.911 CXX test/cpp_headers/accel_module.o 00:10:50.911 LINK thread 00:10:50.911 CC test/env/mem_callbacks/mem_callbacks.o 00:10:50.911 CXX test/cpp_headers/assert.o 00:10:50.911 CC test/env/vtophys/vtophys.o 00:10:51.170 LINK hello_sock 00:10:51.170 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:51.170 CXX test/cpp_headers/barrier.o 00:10:51.170 LINK vtophys 00:10:51.170 CC test/env/memory/memory_ut.o 00:10:51.170 CC test/app/histogram_perf/histogram_perf.o 00:10:51.170 CC test/app/jsoncat/jsoncat.o 00:10:51.170 LINK env_dpdk_post_init 00:10:51.170 CXX test/cpp_headers/base64.o 00:10:51.170 CC test/app/stub/stub.o 00:10:51.170 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:10:51.170 CXX test/cpp_headers/bdev.o 00:10:51.429 LINK histogram_perf 00:10:51.429 LINK jsoncat 00:10:51.429 CC examples/accel/perf/accel_perf.o 00:10:51.429 LINK stub 00:10:51.429 CXX test/cpp_headers/bdev_module.o 00:10:51.429 LINK mem_callbacks 00:10:51.429 CXX test/cpp_headers/bdev_zone.o 00:10:51.429 CXX test/cpp_headers/bit_array.o 00:10:51.687 CC examples/blob/hello_world/hello_blob.o 00:10:51.687 CC examples/nvme/hello_world/hello_world.o 00:10:51.687 CXX test/cpp_headers/bit_pool.o 00:10:51.687 CC test/env/pci/pci_ut.o 00:10:51.687 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:10:51.687 LINK nvme_fuzz 00:10:51.687 CC examples/blob/cli/blobcli.o 00:10:51.945 CC test/event/event_perf/event_perf.o 00:10:51.945 CXX test/cpp_headers/blob_bdev.o 00:10:51.945 LINK hello_blob 00:10:51.945 LINK hello_world 00:10:51.945 CXX test/cpp_headers/blobfs_bdev.o 00:10:51.945 LINK accel_perf 00:10:51.945 LINK event_perf 00:10:52.204 CC test/event/reactor/reactor.o 00:10:52.204 CXX test/cpp_headers/blobfs.o 00:10:52.204 CC examples/nvme/reconnect/reconnect.o 00:10:52.204 LINK pci_ut 00:10:52.204 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:52.204 CXX test/cpp_headers/blob.o 00:10:52.204 CC examples/nvme/arbitration/arbitration.o 00:10:52.204 LINK reactor 00:10:52.462 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:10:52.462 LINK memory_ut 00:10:52.462 CXX test/cpp_headers/conf.o 00:10:52.462 LINK blobcli 00:10:52.462 CC test/event/reactor_perf/reactor_perf.o 00:10:52.462 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:10:52.462 CC test/event/app_repeat/app_repeat.o 00:10:52.462 LINK reconnect 00:10:52.462 CXX test/cpp_headers/config.o 00:10:52.462 CXX test/cpp_headers/cpuset.o 00:10:52.719 LINK arbitration 00:10:52.719 CXX test/cpp_headers/crc16.o 00:10:52.719 CXX test/cpp_headers/crc32.o 00:10:52.719 LINK reactor_perf 00:10:52.719 LINK app_repeat 00:10:52.719 LINK nvme_manage 00:10:52.719 CXX test/cpp_headers/crc64.o 00:10:52.719 CXX 
test/cpp_headers/dif.o 00:10:52.719 CC examples/nvme/hotplug/hotplug.o 00:10:52.719 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:52.719 CC examples/nvme/abort/abort.o 00:10:52.981 CXX test/cpp_headers/dma.o 00:10:52.981 CC test/event/scheduler/scheduler.o 00:10:52.981 LINK vhost_fuzz 00:10:52.981 CXX test/cpp_headers/endian.o 00:10:52.981 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:52.981 LINK cmb_copy 00:10:52.981 CXX test/cpp_headers/env_dpdk.o 00:10:52.981 LINK hotplug 00:10:53.238 LINK scheduler 00:10:53.238 CC examples/bdev/hello_world/hello_bdev.o 00:10:53.238 CXX test/cpp_headers/env.o 00:10:53.238 LINK pmr_persistence 00:10:53.238 CXX test/cpp_headers/event.o 00:10:53.238 LINK abort 00:10:53.238 CC examples/bdev/bdevperf/bdevperf.o 00:10:53.238 CXX test/cpp_headers/fd_group.o 00:10:53.495 CC test/lvol/esnap/esnap.o 00:10:53.495 CXX test/cpp_headers/fd.o 00:10:53.495 CXX test/cpp_headers/file.o 00:10:53.495 CXX test/cpp_headers/ftl.o 00:10:53.495 LINK hello_bdev 00:10:53.495 CXX test/cpp_headers/gpt_spec.o 00:10:53.495 CC test/rpc_client/rpc_client_test.o 00:10:53.495 CC test/nvme/aer/aer.o 00:10:53.495 CXX test/cpp_headers/hexlify.o 00:10:53.495 CXX test/cpp_headers/histogram_data.o 00:10:54.448 CC test/nvme/reset/reset.o 00:10:54.448 CXX test/cpp_headers/idxd.o 00:10:54.448 CXX test/cpp_headers/idxd_spec.o 00:10:54.448 LINK rpc_client_test 00:10:54.448 LINK iscsi_fuzz 00:10:54.448 CXX test/cpp_headers/init.o 00:10:54.448 CXX test/cpp_headers/ioat.o 00:10:54.448 LINK aer 00:10:54.448 CC test/nvme/sgl/sgl.o 00:10:54.448 LINK reset 00:10:54.448 CXX test/cpp_headers/ioat_spec.o 00:10:54.448 CC test/accel/dif/dif.o 00:10:54.448 CC test/nvme/e2edp/nvme_dp.o 00:10:54.448 CC test/nvme/overhead/overhead.o 00:10:54.448 CXX test/cpp_headers/iscsi_spec.o 00:10:54.448 CXX test/cpp_headers/json.o 00:10:54.448 CC test/nvme/err_injection/err_injection.o 00:10:54.448 LINK bdevperf 00:10:54.448 LINK sgl 00:10:54.448 CC test/nvme/startup/startup.o 00:10:54.448 CXX test/cpp_headers/jsonrpc.o 00:10:54.448 LINK err_injection 00:10:54.448 LINK nvme_dp 00:10:54.448 LINK startup 00:10:54.448 LINK overhead 00:10:54.448 CC test/nvme/reserve/reserve.o 00:10:54.448 CXX test/cpp_headers/keyring.o 00:10:54.448 CC test/nvme/simple_copy/simple_copy.o 00:10:54.448 CXX test/cpp_headers/keyring_module.o 00:10:54.448 CC examples/nvmf/nvmf/nvmf.o 00:10:54.706 LINK dif 00:10:54.706 CXX test/cpp_headers/likely.o 00:10:54.706 LINK reserve 00:10:54.706 CXX test/cpp_headers/log.o 00:10:54.706 CC test/nvme/connect_stress/connect_stress.o 00:10:54.706 CC test/nvme/boot_partition/boot_partition.o 00:10:54.706 CC test/nvme/compliance/nvme_compliance.o 00:10:54.706 LINK simple_copy 00:10:54.706 CC test/nvme/fused_ordering/fused_ordering.o 00:10:54.706 CXX test/cpp_headers/lvol.o 00:10:54.964 CXX test/cpp_headers/memory.o 00:10:54.964 LINK boot_partition 00:10:54.964 LINK connect_stress 00:10:54.964 CXX test/cpp_headers/mmio.o 00:10:54.964 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:54.964 LINK nvmf 00:10:54.964 CXX test/cpp_headers/nbd.o 00:10:54.964 LINK fused_ordering 00:10:54.964 CXX test/cpp_headers/net.o 00:10:54.964 CXX test/cpp_headers/notify.o 00:10:54.964 LINK nvme_compliance 00:10:55.222 LINK doorbell_aers 00:10:55.222 CC test/nvme/fdp/fdp.o 00:10:55.222 CC test/nvme/cuse/cuse.o 00:10:55.222 CXX test/cpp_headers/nvme.o 00:10:55.222 CXX test/cpp_headers/nvme_intel.o 00:10:55.222 CXX test/cpp_headers/nvme_ocssd.o 00:10:55.222 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:55.222 CC 
test/bdev/bdevio/bdevio.o 00:10:55.222 CXX test/cpp_headers/nvme_spec.o 00:10:55.222 CXX test/cpp_headers/nvme_zns.o 00:10:55.222 CXX test/cpp_headers/nvmf_cmd.o 00:10:55.481 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:55.481 CXX test/cpp_headers/nvmf.o 00:10:55.481 CXX test/cpp_headers/nvmf_spec.o 00:10:55.481 CXX test/cpp_headers/nvmf_transport.o 00:10:55.481 CXX test/cpp_headers/opal.o 00:10:55.481 LINK fdp 00:10:55.481 CXX test/cpp_headers/opal_spec.o 00:10:55.481 CXX test/cpp_headers/pci_ids.o 00:10:55.481 CXX test/cpp_headers/pipe.o 00:10:55.481 CXX test/cpp_headers/queue.o 00:10:55.739 CXX test/cpp_headers/reduce.o 00:10:55.739 CXX test/cpp_headers/rpc.o 00:10:55.739 LINK bdevio 00:10:55.739 CXX test/cpp_headers/scheduler.o 00:10:55.739 CXX test/cpp_headers/scsi.o 00:10:55.739 CXX test/cpp_headers/scsi_spec.o 00:10:55.739 CXX test/cpp_headers/sock.o 00:10:55.739 CXX test/cpp_headers/stdinc.o 00:10:55.739 CXX test/cpp_headers/string.o 00:10:55.739 CXX test/cpp_headers/thread.o 00:10:55.739 CXX test/cpp_headers/trace.o 00:10:55.739 CXX test/cpp_headers/trace_parser.o 00:10:55.739 CXX test/cpp_headers/tree.o 00:10:55.739 CXX test/cpp_headers/ublk.o 00:10:55.996 CXX test/cpp_headers/util.o 00:10:55.996 CXX test/cpp_headers/uuid.o 00:10:55.996 CXX test/cpp_headers/version.o 00:10:55.996 CXX test/cpp_headers/vfio_user_pci.o 00:10:55.996 CXX test/cpp_headers/vfio_user_spec.o 00:10:55.996 CXX test/cpp_headers/vhost.o 00:10:55.996 CXX test/cpp_headers/vmd.o 00:10:55.996 CXX test/cpp_headers/xor.o 00:10:55.996 CXX test/cpp_headers/zipf.o 00:10:56.573 LINK cuse 00:10:59.870 LINK esnap 00:11:00.130 00:11:00.130 real 1m13.695s 00:11:00.130 user 6m44.030s 00:11:00.130 sys 1m36.733s 00:11:00.130 09:25:00 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:11:00.130 09:25:00 make -- common/autotest_common.sh@10 -- $ set +x 00:11:00.130 ************************************ 00:11:00.130 END TEST make 00:11:00.130 ************************************ 00:11:00.130 09:25:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:11:00.130 09:25:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:11:00.130 09:25:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:11:00.130 09:25:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:00.130 09:25:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:11:00.130 09:25:00 -- pm/common@44 -- $ pid=5394 00:11:00.130 09:25:00 -- pm/common@50 -- $ kill -TERM 5394 00:11:00.130 09:25:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:11:00.130 09:25:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:11:00.130 09:25:00 -- pm/common@44 -- $ pid=5396 00:11:00.130 09:25:00 -- pm/common@50 -- $ kill -TERM 5396 00:11:00.130 09:25:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:00.130 09:25:00 -- nvmf/common.sh@7 -- # uname -s 00:11:00.130 09:25:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.130 09:25:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.130 09:25:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.130 09:25:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.130 09:25:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.130 09:25:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.130 09:25:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.130 09:25:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:11:00.130 09:25:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.130 09:25:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.130 09:25:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3d617ecc-e11c-4945-98e2-f53b121c839e 00:11:00.130 09:25:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=3d617ecc-e11c-4945-98e2-f53b121c839e 00:11:00.130 09:25:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.130 09:25:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.130 09:25:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:00.130 09:25:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.130 09:25:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:00.130 09:25:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.130 09:25:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.130 09:25:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.130 09:25:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.130 09:25:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.130 09:25:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.130 09:25:00 -- paths/export.sh@5 -- # export PATH 00:11:00.130 09:25:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.130 09:25:00 -- nvmf/common.sh@47 -- # : 0 00:11:00.130 09:25:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.130 09:25:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.130 09:25:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.130 09:25:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.130 09:25:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.130 09:25:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.130 09:25:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.130 09:25:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.130 09:25:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:11:00.130 09:25:00 -- spdk/autotest.sh@32 -- # uname -s 00:11:00.130 09:25:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:11:00.130 09:25:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:11:00.130 09:25:00 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:00.130 09:25:00 -- spdk/autotest.sh@39 -- # echo 
'|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:11:00.130 09:25:00 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:11:00.130 09:25:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:11:00.390 09:25:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:11:00.390 09:25:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:11:00.390 09:25:00 -- spdk/autotest.sh@48 -- # udevadm_pid=53932 00:11:00.390 09:25:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:11:00.390 09:25:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:11:00.390 09:25:00 -- pm/common@17 -- # local monitor 00:11:00.390 09:25:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:00.390 09:25:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:11:00.390 09:25:00 -- pm/common@25 -- # sleep 1 00:11:00.390 09:25:00 -- pm/common@21 -- # date +%s 00:11:00.390 09:25:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721899500 00:11:00.390 09:25:00 -- pm/common@21 -- # date +%s 00:11:00.390 09:25:00 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721899500 00:11:00.390 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721899500_collect-cpu-load.pm.log 00:11:00.390 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721899500_collect-vmstat.pm.log 00:11:01.329 09:25:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:11:01.329 09:25:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:11:01.329 09:25:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.329 09:25:01 -- common/autotest_common.sh@10 -- # set +x 00:11:01.329 09:25:01 -- spdk/autotest.sh@59 -- # create_test_list 00:11:01.329 09:25:01 -- common/autotest_common.sh@748 -- # xtrace_disable 00:11:01.329 09:25:01 -- common/autotest_common.sh@10 -- # set +x 00:11:01.329 09:25:01 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:11:01.329 09:25:01 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:11:01.329 09:25:01 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:11:01.329 09:25:01 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:11:01.329 09:25:01 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:11:01.329 09:25:01 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:11:01.329 09:25:01 -- common/autotest_common.sh@1455 -- # uname 00:11:01.329 09:25:01 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:11:01.329 09:25:01 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:11:01.329 09:25:01 -- common/autotest_common.sh@1475 -- # uname 00:11:01.329 09:25:01 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:11:01.329 09:25:01 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:11:01.329 09:25:01 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:11:01.329 09:25:01 -- spdk/autotest.sh@72 -- # hash lcov 00:11:01.329 09:25:01 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:11:01.329 09:25:01 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:11:01.329 --rc lcov_branch_coverage=1 00:11:01.329 --rc lcov_function_coverage=1 00:11:01.329 --rc genhtml_branch_coverage=1 00:11:01.329 --rc 
genhtml_function_coverage=1 00:11:01.329 --rc genhtml_legend=1 00:11:01.329 --rc geninfo_all_blocks=1 00:11:01.329 ' 00:11:01.329 09:25:01 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:11:01.329 --rc lcov_branch_coverage=1 00:11:01.329 --rc lcov_function_coverage=1 00:11:01.329 --rc genhtml_branch_coverage=1 00:11:01.329 --rc genhtml_function_coverage=1 00:11:01.329 --rc genhtml_legend=1 00:11:01.329 --rc geninfo_all_blocks=1 00:11:01.329 ' 00:11:01.329 09:25:01 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:11:01.329 --rc lcov_branch_coverage=1 00:11:01.329 --rc lcov_function_coverage=1 00:11:01.329 --rc genhtml_branch_coverage=1 00:11:01.329 --rc genhtml_function_coverage=1 00:11:01.329 --rc genhtml_legend=1 00:11:01.329 --rc geninfo_all_blocks=1 00:11:01.329 --no-external' 00:11:01.329 09:25:01 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:11:01.329 --rc lcov_branch_coverage=1 00:11:01.329 --rc lcov_function_coverage=1 00:11:01.329 --rc genhtml_branch_coverage=1 00:11:01.329 --rc genhtml_function_coverage=1 00:11:01.329 --rc genhtml_legend=1 00:11:01.329 --rc geninfo_all_blocks=1 00:11:01.329 --no-external' 00:11:01.329 09:25:01 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:11:01.589 lcov: LCOV version 1.14 00:11:01.589 09:25:01 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:11:16.514 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:11:16.514 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions 
found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 
00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce 
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:11:28.724 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:11:28.724 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:11:28.725 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:11:28.725 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:11:28.725 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:11:32.015 09:25:32 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:11:32.015 09:25:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:32.015 09:25:32 -- common/autotest_common.sh@10 -- # set +x 00:11:32.015 09:25:32 -- spdk/autotest.sh@91 -- # rm 
-f 00:11:32.015 09:25:32 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:32.583 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.155 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:11:33.155 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:11:33.155 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:11:33.155 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:11:33.155 09:25:33 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:11:33.155 09:25:33 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:11:33.155 09:25:33 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:11:33.155 09:25:33 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:11:33.155 09:25:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:33.155 09:25:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:11:33.155 09:25:33 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:11:33.155 09:25:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:33.155 09:25:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:11:33.155 09:25:33 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:11:33.155 09:25:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:33.155 09:25:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:11:33.155 09:25:33 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:11:33.155 09:25:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:33.155 09:25:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:11:33.155 09:25:33 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:11:33.155 09:25:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:33.155 09:25:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:11:33.155 09:25:33 -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:11:33.155 09:25:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:33.155 09:25:33 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:11:33.155 09:25:33 -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:11:33.155 09:25:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:33.155 09:25:33 -- 
common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:11:33.155 09:25:33 -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:11:33.155 09:25:33 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:33.155 09:25:33 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:33.155 09:25:33 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:11:33.155 09:25:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:33.155 09:25:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:33.156 09:25:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:11:33.156 09:25:33 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:11:33.156 09:25:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:33.415 No valid GPT data, bailing 00:11:33.415 09:25:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:33.415 09:25:33 -- scripts/common.sh@391 -- # pt= 00:11:33.415 09:25:33 -- scripts/common.sh@392 -- # return 1 00:11:33.415 09:25:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:11:33.415 1+0 records in 00:11:33.415 1+0 records out 00:11:33.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179854 s, 58.3 MB/s 00:11:33.415 09:25:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:33.415 09:25:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:33.415 09:25:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:11:33.415 09:25:33 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:11:33.415 09:25:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:11:33.415 No valid GPT data, bailing 00:11:33.415 09:25:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:33.415 09:25:33 -- scripts/common.sh@391 -- # pt= 00:11:33.415 09:25:33 -- scripts/common.sh@392 -- # return 1 00:11:33.415 09:25:33 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:11:33.415 1+0 records in 00:11:33.415 1+0 records out 00:11:33.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041336 s, 254 MB/s 00:11:33.415 09:25:33 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:33.415 09:25:33 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:33.415 09:25:33 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:11:33.415 09:25:33 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:11:33.415 09:25:33 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:11:33.415 No valid GPT data, bailing 00:11:33.415 09:25:33 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:11:33.415 09:25:34 -- scripts/common.sh@391 -- # pt= 00:11:33.415 09:25:34 -- scripts/common.sh@392 -- # return 1 00:11:33.415 09:25:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:11:33.415 1+0 records in 00:11:33.415 1+0 records out 00:11:33.415 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399882 s, 262 MB/s 00:11:33.415 09:25:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:33.415 09:25:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:33.415 09:25:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:11:33.415 09:25:34 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:11:33.415 09:25:34 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:11:33.700 No valid GPT data, bailing 00:11:33.700 09:25:34 -- 
scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:11:33.700 09:25:34 -- scripts/common.sh@391 -- # pt= 00:11:33.700 09:25:34 -- scripts/common.sh@392 -- # return 1 00:11:33.700 09:25:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:11:33.700 1+0 records in 00:11:33.700 1+0 records out 00:11:33.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00567789 s, 185 MB/s 00:11:33.700 09:25:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:33.700 09:25:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:33.700 09:25:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:11:33.700 09:25:34 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:11:33.700 09:25:34 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:11:33.700 No valid GPT data, bailing 00:11:33.700 09:25:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:11:33.700 09:25:34 -- scripts/common.sh@391 -- # pt= 00:11:33.700 09:25:34 -- scripts/common.sh@392 -- # return 1 00:11:33.700 09:25:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:11:33.700 1+0 records in 00:11:33.700 1+0 records out 00:11:33.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00612325 s, 171 MB/s 00:11:33.700 09:25:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:33.700 09:25:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:33.701 09:25:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:11:33.701 09:25:34 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:11:33.701 09:25:34 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:11:33.701 No valid GPT data, bailing 00:11:33.701 09:25:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:11:33.701 09:25:34 -- scripts/common.sh@391 -- # pt= 00:11:33.701 09:25:34 -- scripts/common.sh@392 -- # return 1 00:11:33.701 09:25:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:11:33.701 1+0 records in 00:11:33.701 1+0 records out 00:11:33.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00604115 s, 174 MB/s 00:11:33.701 09:25:34 -- spdk/autotest.sh@118 -- # sync 00:11:33.701 09:25:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:11:33.701 09:25:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:11:33.701 09:25:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:11:36.234 09:25:36 -- spdk/autotest.sh@124 -- # uname -s 00:11:36.234 09:25:36 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:11:36.234 09:25:36 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:36.234 09:25:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:36.234 09:25:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.234 09:25:36 -- common/autotest_common.sh@10 -- # set +x 00:11:36.234 ************************************ 00:11:36.234 START TEST setup.sh 00:11:36.234 ************************************ 00:11:36.234 09:25:36 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:36.492 * Looking for test storage... 
00:11:36.492 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:36.492 09:25:36 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:11:36.492 09:25:36 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:11:36.493 09:25:36 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:36.493 09:25:36 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:36.493 09:25:36 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:36.493 09:25:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:36.493 ************************************ 00:11:36.493 START TEST acl 00:11:36.493 ************************************ 00:11:36.493 09:25:36 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:36.493 * Looking for test storage... 00:11:36.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:36.493 09:25:37 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:11:36.493 09:25:37 
setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:11:36.493 09:25:37 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:11:36.493 09:25:37 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:11:36.493 09:25:37 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:11:36.493 09:25:37 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:11:36.493 09:25:37 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:11:36.493 09:25:37 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:11:36.493 09:25:37 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:36.493 09:25:37 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:37.873 09:25:38 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:11:37.873 09:25:38 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:11:37.873 09:25:38 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:37.873 09:25:38 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:11:37.873 09:25:38 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:11:37.873 09:25:38 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:38.441 09:25:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:11:38.441 09:25:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:38.441 09:25:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:39.010 Hugepages 00:11:39.010 node hugesize free / total 00:11:39.010 09:25:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:11:39.010 09:25:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:39.010 09:25:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:39.010 00:11:39.010 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:39.010 09:25:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:11:39.010 09:25:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:39.010 09:25:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:39.270 09:25:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:39.530 09:25:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:11:39.530 09:25:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:39.530 09:25:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:11:39.530 09:25:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:39.530 09:25:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:39.530 09:25:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:39.530 09:25:40 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:11:39.530 09:25:40 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:39.530 09:25:40 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:11:39.530 09:25:40 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:39.530 09:25:40 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:39.530 09:25:40 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:39.530 09:25:40 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:11:39.530 09:25:40 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:11:39.530 09:25:40 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:39.530 09:25:40 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.530 09:25:40 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:39.530 ************************************ 00:11:39.530 START TEST denied 00:11:39.530 ************************************ 00:11:39.530 09:25:40 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:11:39.530 09:25:40 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:11:39.530 09:25:40 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:11:39.530 09:25:40 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:11:39.530 09:25:40 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:11:39.530 09:25:40 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:43.733 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:11:43.733 09:25:43 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:11:43.733 09:25:43 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:11:43.733 09:25:43 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:11:43.733 09:25:43 setup.sh.acl.denied -- 
setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:11:43.733 09:25:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:11:43.733 09:25:43 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:43.733 09:25:43 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:43.733 09:25:43 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:11:43.733 09:25:43 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:43.733 09:25:43 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:50.315 00:11:50.315 real 0m9.675s 00:11:50.315 user 0m0.983s 00:11:50.315 sys 0m5.783s 00:11:50.315 09:25:49 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.315 09:25:49 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:11:50.315 ************************************ 00:11:50.315 END TEST denied 00:11:50.315 ************************************ 00:11:50.315 09:25:49 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:11:50.315 09:25:49 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:50.315 09:25:49 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.315 09:25:49 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:50.315 ************************************ 00:11:50.315 START TEST allowed 00:11:50.315 ************************************ 00:11:50.315 09:25:49 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:11:50.315 09:25:49 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:11:50.315 09:25:49 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:11:50.315 09:25:49 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:11:50.315 09:25:49 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:11:50.315 09:25:49 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:50.575 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:13.0 ]] 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:50.575 09:25:51 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:51.954 00:11:51.954 real 0m2.635s 00:11:51.954 user 0m1.080s 00:11:51.954 sys 0m1.555s 00:11:51.954 09:25:52 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.954 09:25:52 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 ************************************ 00:11:51.954 END TEST allowed 00:11:51.954 ************************************ 00:11:51.954 00:11:51.954 real 0m15.593s 00:11:51.954 user 0m3.385s 00:11:51.954 sys 0m9.328s 00:11:51.954 09:25:52 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:51.954 09:25:52 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 ************************************ 00:11:51.954 END TEST acl 00:11:51.954 ************************************ 00:11:51.954 09:25:52 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:11:51.954 09:25:52 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:51.954 09:25:52 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.954 09:25:52 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:51.954 ************************************ 00:11:51.954 START TEST hugepages 00:11:51.954 ************************************ 00:11:51.954 09:25:52 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:11:52.215 * Looking for test storage... 
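The acl denied and allowed tests that finished above exercise setup.sh's PCI filter variables: a controller listed in PCI_BLOCKED must be reported as skipped and stay bound to the kernel nvme driver, while one listed in PCI_ALLOWED is the only controller setup.sh may rebind to a userspace driver. A rough sketch of those two checks, with paths copied from the trace (treat it as an outline, not the exact acl.sh code):

  SETUP=/home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # denied: the blocked controller is announced as skipped and keeps its nvme driver
  PCI_BLOCKED=' 0000:00:10.0' "$SETUP" config | grep 'Skipping denied controller at 0000:00:10.0'
  [[ $(readlink -f /sys/bus/pci/devices/0000:00:10.0/driver) == */drivers/nvme ]]
  "$SETUP" reset
  # allowed: only the allowed controller is rebound, e.g. "nvme -> uio_pci_generic"
  PCI_ALLOWED=0000:00:10.0 "$SETUP" config | grep -E '0000:00:10.0 .*: nvme -> .*'
  "$SETUP" reset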
00:11:52.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 5819076 kB' 'MemAvailable: 7420800 kB' 'Buffers: 2436 kB' 'Cached: 1814956 kB' 'SwapCached: 0 kB' 'Active: 444956 kB' 'Inactive: 1474912 kB' 'Active(anon): 112988 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474912 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 104052 kB' 'Mapped: 49048 kB' 'Shmem: 10512 kB' 'KReclaimable: 63568 kB' 'Slab: 139584 kB' 'SReclaimable: 63568 kB' 'SUnreclaim: 76016 kB' 'KernelStack: 6340 kB' 'PageTables: 3916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 337612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.215 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
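The long run of field comparisons above (and the matching run further down in the default_setup test) is setup/common.sh's get_meminfo helper walking /proc/meminfo one "key: value" pair at a time; the walk ends just below when the Hugepagesize line matches and its value, 2048 (kB), is echoed back and stored by hugepages.sh as default_hugepages. A minimal sketch of that helper, with the exact redirections assumed (the trace shows the variable names but not every plumbing detail):

  get_meminfo() {
      local get=$1 line var val _
      local -a mem
      mapfile -t mem < /proc/meminfo                # per-node meminfo files get a "Node N " prefix stripped first
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"    # e.g. var=Hugepagesize val=2048
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }
  get_meminfo Hugepagesize    # -> 2048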
00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:52.216 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:52.217 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:52.217 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:52.217 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:52.217 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:52.217 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:52.217 09:25:52 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:11:52.217 09:25:52 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:52.217 09:25:52 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.217 09:25:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:52.217 ************************************ 00:11:52.217 START TEST default_setup 00:11:52.217 ************************************ 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:11:52.217 09:25:52 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:52.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:53.729 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.729 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.729 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.729 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:53.729 
09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7912688 kB' 'MemAvailable: 9514176 kB' 'Buffers: 2436 kB' 'Cached: 1814944 kB' 'SwapCached: 0 kB' 'Active: 464584 kB' 'Inactive: 1474932 kB' 'Active(anon): 132616 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123688 kB' 'Mapped: 48788 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 138972 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75920 kB' 'KernelStack: 6304 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.729 09:25:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.729 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.730 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7912688 kB' 'MemAvailable: 9514176 kB' 'Buffers: 2436 kB' 'Cached: 1814944 kB' 'SwapCached: 0 kB' 'Active: 464164 kB' 'Inactive: 1474932 kB' 'Active(anon): 132196 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123352 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 138976 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75924 kB' 'KernelStack: 6336 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.731 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.732 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7912688 kB' 'MemAvailable: 9514176 kB' 'Buffers: 2436 kB' 'Cached: 1814944 kB' 'SwapCached: 0 kB' 'Active: 464164 kB' 'Inactive: 1474932 kB' 'Active(anon): 132196 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123612 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 138976 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75924 kB' 'KernelStack: 6336 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.733 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 
09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.734 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
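The same scan is repeated for HugePages_Surp and HugePages_Rsvd, after which setup/hugepages.sh echoes nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and verifies that the configured count adds up via (( 1024 == nr_hugepages + surp + resv )). A hedged sketch of that bookkeeping using the values observed in this run; the helper name is illustrative only, not the SPDK function:

#!/usr/bin/env bash
# Hedged sketch of the consistency check performed by setup/hugepages.sh:
# HugePages_Total must equal nr_hugepages + surplus + reserved.
check_hugepages() {
    local nr_hugepages=$1 surp=$2 resv=$3 total=$4
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
        return 1
    fi
}

check_hugepages 1024 0 0 1024   # values observed in this run

With surp=0 and resv=0 the check reduces to HugePages_Total == nr_hugepages, matching the 1024 pages reported by /proc/meminfo in this log.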
00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:53.735 nr_hugepages=1024 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:53.735 resv_hugepages=0 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:53.735 surplus_hugepages=0 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:53.735 anon_hugepages=0 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == 
nr_hugepages )) 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7912688 kB' 'MemAvailable: 9514176 kB' 'Buffers: 2436 kB' 'Cached: 1814944 kB' 'SwapCached: 0 kB' 'Active: 464280 kB' 'Inactive: 1474932 kB' 'Active(anon): 132312 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 123424 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 138976 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75924 kB' 'KernelStack: 6288 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.735 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.736 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 
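The trace above is the meminfo-parsing helper walking every key in /proc/meminfo (or a node's own meminfo file) until it hits the one it was asked for, then echoing that value. A minimal sketch of that helper, reconstructed purely from the xtrace and not copied from setup/common.sh, would look roughly like this (the function name, the per-node path, and the "Node N " prefix stripping all follow what the trace shows; treat it as an illustration, not the shipped implementation):

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup the trace exercises (assumption: reconstructed
    # from the xtrace, not the actual setup/common.sh source).
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local var val _

        # Per-node queries (e.g. HugePages_Surp for node 0) read the node's file
        # when it exists, as the trace does for /sys/devices/system/node/node0/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node lines look like "Node 0 HugePages_Total: 1024"; drop the prefix
        # so the same key matching works for both files.
        mem=("${mem[@]#Node +([0-9]) }")

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # How the surrounding hugepages checks appear to use it:
    #   total=$(get_meminfo HugePages_Total)   # 1024 in the trace above
    #   surp=$(get_meminfo HugePages_Surp 0)   # 0 for node0 in the trace above

The long run of "continue" lines in the log is simply this loop skipping every non-matching key (Buffers, Cached, Active, ...) before it reaches HugePages_Total and echoes 1024, which the caller then checks against nr_hugepages plus surplus and reserved pages.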
00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7912688 kB' 'MemUsed: 4329280 kB' 'SwapCached: 0 kB' 'Active: 464352 kB' 'Inactive: 1474932 kB' 'Active(anon): 132384 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474932 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 1817380 kB' 'Mapped: 48672 kB' 'AnonPages: 123484 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63052 kB' 'Slab: 138976 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75924 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.737 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.997 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:53.998 node0=1024 expecting 1024 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:53.998 00:11:53.998 real 0m1.630s 00:11:53.998 user 0m0.654s 00:11:53.998 sys 0m0.967s 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.998 09:25:54 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:11:53.998 ************************************ 00:11:53.998 END TEST default_setup 00:11:53.998 ************************************ 00:11:53.998 09:25:54 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:11:53.998 09:25:54 
setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:53.998 09:25:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.998 09:25:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:53.998 ************************************ 00:11:53.998 START TEST per_node_1G_alloc 00:11:53.998 ************************************ 00:11:53.998 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:11:53.998 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:11:53.998 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:11:53.998 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:53.998 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:53.998 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:11:53.998 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:53.998 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:53.999 09:25:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:54.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:54.576 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:54.576 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:54.576 0000:00:12.0 (1b36 0010): 
Already using the uio_pci_generic driver 00:11:54.576 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8962796 kB' 'MemAvailable: 10564292 kB' 'Buffers: 2436 kB' 'Cached: 1814944 kB' 'SwapCached: 0 kB' 'Active: 464592 kB' 'Inactive: 1474940 kB' 'Active(anon): 132624 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 128 kB' 'Writeback: 0 kB' 'AnonPages: 123780 kB' 'Mapped: 48812 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 138892 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75840 kB' 'KernelStack: 6336 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 
163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.576 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
get_meminfo HugePages_Surp 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8962812 kB' 'MemAvailable: 10564308 kB' 'Buffers: 2436 kB' 'Cached: 1814944 kB' 'SwapCached: 0 kB' 'Active: 464380 kB' 'Inactive: 1474940 kB' 'Active(anon): 132412 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123516 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 138900 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75848 kB' 'KernelStack: 6320 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 
09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.577 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@99 -- # surp=0 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8962812 kB' 'MemAvailable: 10564308 kB' 'Buffers: 2436 kB' 'Cached: 1814944 kB' 'SwapCached: 0 kB' 'Active: 464384 kB' 'Inactive: 1474940 kB' 'Active(anon): 132416 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123516 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 138900 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75848 kB' 'KernelStack: 6320 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.578 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 
09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.843 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:11:54.844 nr_hugepages=512 00:11:54.844 resv_hugepages=0 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo 
resv_hugepages=0 00:11:54.844 surplus_hugepages=0 00:11:54.844 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:54.844 anon_hugepages=0 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8962812 kB' 'MemAvailable: 10564308 kB' 'Buffers: 2436 kB' 'Cached: 1814944 kB' 'SwapCached: 0 kB' 'Active: 464620 kB' 'Inactive: 1474940 kB' 'Active(anon): 132652 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 123752 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 138896 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75844 kB' 'KernelStack: 6320 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.845 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.846 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:54.847 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8962812 kB' 'MemUsed: 3279156 kB' 'SwapCached: 0 kB' 'Active: 464504 kB' 'Inactive: 1474940 kB' 'Active(anon): 132536 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 1817380 kB' 'Mapped: 48672 kB' 'AnonPages: 123628 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63052 kB' 'Slab: 138896 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
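[Editor's note] The long runs of "IFS=': ' / read -r var val _ / [[ key == ... ]] / continue" entries above are the xtrace of the get_meminfo helper in setup/common.sh scanning a meminfo file until it hits the requested key. Below is a minimal sketch of that helper, reconstructed from the trace rather than copied from the SPDK source, so details may differ; it shows why each non-matching key produces one "continue" entry and why the loop ends with "echo 512 / return 0" once HugePages_Total is reached.

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _

    # Per-node counters live under /sys; /proc/meminfo is the system-wide fallback.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    while read -r line; do
        # Per-node lines carry a "Node N " prefix; strip it so the key compares cleanly.
        line=${line#"Node $node "}
        # Splitting on ': ' leaves the key in $var and the value in $val,
        # which is the IFS=': ' / read -r var val _ pattern seen in the trace.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # every mismatch is one "continue" entry above
        echo "$val"                        # e.g. 512 for HugePages_Total on node0
        return 0
    done < "$mem_f"
    return 1
}

# Usage matching the trace: get_meminfo HugePages_Surp 0   ->  0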
00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.847 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 
09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:54.848 node0=512 expecting 512 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:54.848 00:11:54.848 real 0m0.844s 00:11:54.848 user 0m0.377s 00:11:54.848 sys 0m0.515s 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.848 09:25:55 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:54.848 ************************************ 00:11:54.848 END TEST per_node_1G_alloc 00:11:54.848 ************************************ 00:11:54.848 09:25:55 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:11:54.848 09:25:55 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:54.848 09:25:55 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.848 09:25:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:54.849 ************************************ 00:11:54.849 START TEST even_2G_alloc 00:11:54.849 ************************************ 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:54.849 
09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:54.849 09:25:55 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:55.418 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:55.418 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:55.418 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:55.418 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:55.418 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:55.418 09:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7917276 kB' 'MemAvailable: 9518776 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 464332 kB' 'Inactive: 1474944 kB' 'Active(anon): 132364 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123688 kB' 'Mapped: 48856 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 138996 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75944 kB' 'KernelStack: 6272 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.418 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.681 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
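[Editor's note] At this point the even_2G_alloc test has run scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes and is re-reading meminfo to verify the result. The sketch below is a reconstruction of that verification flow for illustration only (verify_even_2G_alloc is an illustrative name, not the setup/hugepages.sh function); it reuses the get_meminfo sketch given earlier and assumes nothing beyond what the trace shows: anonymous THP usage must be zero, the global pool must match the request, and each NUMA node must hold an even share.

verify_even_2G_alloc() {
    local expected_total=1024
    local -a nodes=(/sys/devices/system/node/node[0-9]*)
    local per_node=$(( expected_total / ${#nodes[@]} )) node idx

    # With THP not pinned to "never", anonymous huge pages would skew the
    # accounting, so the test first confirms none are in use (anon=0 in the trace).
    (( $(get_meminfo AnonHugePages) == 0 )) || return 1

    # The global pool must match what setup.sh was asked to allocate.
    (( $(get_meminfo HugePages_Total) == expected_total )) || return 1

    # Every node should hold an equal share; the log later prints lines of the
    # form "node0=<count> expecting <count>" for each node it checks.
    for node in "${nodes[@]}"; do
        idx=${node##*node}
        echo "node$idx=$(get_meminfo HugePages_Total "$idx") expecting $per_node"
        (( $(get_meminfo HugePages_Total "$idx") == per_node )) || return 1
    done
}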
00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7917024 kB' 'MemAvailable: 9518524 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 464248 kB' 'Inactive: 1474944 kB' 'Active(anon): 132280 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123388 kB' 'Mapped: 48676 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 139012 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75960 kB' 'KernelStack: 6320 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:55.682 09:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.682 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.683 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
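The repetitive trace above is bash xtrace output from the get_meminfo helper in setup/common.sh: hugepages.sh@99 calls get_meminfo HugePages_Surp, the helper snapshots /proc/meminfo into an array (mapfile at common.sh@28, dumped whole by the printf at @16) and then scans it field by field, so every field that is not HugePages_Surp produces one [[ ... ]] test plus a continue. A minimal sketch of that helper, reconstructed from the xtrace line tags (@17..@33); the verbatim setup/common.sh source may differ in detail:

    #!/usr/bin/env bash
    # Reconstruction of setup/common.sh:get_meminfo() from the xtrace above (not verbatim).
    # Usage: get_meminfo <field> [numa-node]  -> prints the field's value and returns 0.
    shopt -s extglob    # needed for the "Node <n> " prefix strip below

    get_meminfo() {
        local get=$1                # @17
        local node=${2:-}           # @18 (empty in this trace)
        local var val               # @19
        local mem_f mem             # @20

        mem_f=/proc/meminfo         # @22
        # @23/@25: with an empty $node the per-node sysfs path does not exist,
        # so the global /proc/meminfo is kept (exactly what this run shows).
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"              # @28
        mem=("${mem[@]#Node +([0-9]) }")       # @29: strip the "Node <n> " prefix used by per-node files

        # @16 printf plus @31/@32/@33: scan field by field until $get matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # one skipped field per "continue" line in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

In this run the lookup prints 0, which the caller stores as surp=0 a little further down; the same helper runs again below for HugePages_Rsvd and HugePages_Total, producing the same wall of continue lines each time.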
00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:55.684 
09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7917024 kB' 'MemAvailable: 9518524 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 464248 kB' 'Inactive: 1474944 kB' 'Active(anon): 132280 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123388 kB' 'Mapped: 48676 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 139012 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75960 kB' 'KernelStack: 6320 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
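The /proc/meminfo snapshot printed before each scan is identical throughout this step, and its hugepage fields are self-consistent with the test name (even_2G_alloc sets up 2 GiB worth of 2 MiB pages). A quick arithmetic check against the values shown in the printf line above:

    # Values copied from the snapshot above: HugePages_Total: 1024, Hugepagesize: 2048 kB.
    hugepages_total=1024
    hugepagesize_kb=2048
    echo "$(( hugepages_total * hugepagesize_kb )) kB"   # 2097152 kB = 2 GiB, matching 'Hugetlb: 2097152 kB'

HugePages_Free is also 1024 in the same snapshot, i.e. none of the pool is in use while these counters are sampled.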
00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.684 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.685 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:55.686 nr_hugepages=1024 00:11:55.686 resv_hugepages=0 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:55.686 surplus_hugepages=0 00:11:55.686 anon_hugepages=0 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7917544 kB' 'MemAvailable: 9519044 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 464248 kB' 'Inactive: 1474944 kB' 'Active(anon): 132280 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123388 kB' 'Mapped: 48676 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 139012 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75960 kB' 'KernelStack: 6320 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.686 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
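Between the HugePages_Rsvd lookup and the HugePages_Total scan still running above, the trace passes through the verification step of setup/hugepages.sh (@97..@110): the values returned by get_meminfo are echoed (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and the expected page count is checked arithmetically before the next scan starts. A sketch of that step as reconstructed from the xtrace; the left-hand 1024 appears already expanded in the log, and the failure handling shown here is illustrative rather than taken from the trace:

    # Reconstruction of the check traced at setup/hugepages.sh@97..@110 (not the verbatim script).
    nr_hugepages=1024                     # page count configured for even_2G_alloc

    anon=$(get_meminfo AnonHugePages)     # @97  -> 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # @99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)    # @100 -> 0

    echo "nr_hugepages=$nr_hugepages"     # @102
    echo "resv_hugepages=$resv"           # @103
    echo "surplus_hugepages=$surp"        # @104
    echo "anon_hugepages=$anon"           # @105

    # @107/@109: the allocated pool must account for exactly the configured pages,
    # with nothing surplus or reserved; both checks pass here since surp and resv are 0.
    (( 1024 == nr_hugepages + surp + resv )) || exit 1
    (( 1024 == nr_hugepages )) || exit 1

    # @110: the step then re-reads /proc/meminfo for HugePages_Total, which is the
    # field-by-field scan in progress around this point of the log.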
00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.687 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 
09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7917544 kB' 'MemUsed: 4324424 kB' 'SwapCached: 0 kB' 'Active: 464180 kB' 'Inactive: 1474944 kB' 'Active(anon): 132212 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1817384 kB' 'Mapped: 48676 kB' 'AnonPages: 123572 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63052 kB' 'Slab: 139008 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 75956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.688 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:55.689 node0=1024 expecting 1024 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:55.689 09:25:56 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:55.690 00:11:55.690 real 0m0.835s 00:11:55.690 user 0m0.358s 00:11:55.690 sys 0m0.519s 00:11:55.690 09:25:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:55.690 09:25:56 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:55.690 ************************************ 00:11:55.690 END TEST even_2G_alloc 00:11:55.690 ************************************ 00:11:55.690 09:25:56 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:11:55.690 09:25:56 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:55.690 09:25:56 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:55.690 09:25:56 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 
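The even_2G_alloc block above ends with setup/hugepages.sh confirming that the 1024 huge pages it requested are all visible both system-wide and on node0 ("node0=1024 expecting 1024"). The lookup it repeats for every field is setup/common.sh's get_meminfo: pick a meminfo file (the node-local one when a node number is given), strip the "Node N" prefix, and scan line by line with IFS=': ' until the requested key matches. A minimal stand-alone sketch of that pattern follows; it is a reconstruction for illustration, not the SPDK setup/common.sh source, and the final check assumes 1024 x 2048 kB pages have already been reserved, as in this run.

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup traced above (assumed reconstruction, not the SPDK source).
shopt -s extglob

get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo var val _

        # Per-node counters live under /sys/devices/system/node/node<N>/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        local mem
        mapfile -t mem < "$mem_f"
        # Node-local files prefix each line with "Node <N> "; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
                # e.g. "HugePages_Total:    1024" -> var=HugePages_Total, val=1024
                IFS=': ' read -r var val _ <<< "$line"
                if [[ $var == "$get" ]]; then
                        echo "$val"
                        return 0
                fi
        done
        return 1
}

# Example mirroring the check above: the total must equal the requested count plus surplus.
nr_hugepages=1024   # assumed, matching this run
surp=$(get_meminfo HugePages_Surp 0)
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp )) && echo "node0=$nr_hugepages expecting $nr_hugepages"

The real script additionally folds HugePages_Rsvd into the comparison and spreads the count across every /sys/devices/system/node/node* directory; the trace that follows repeats the same scan for the odd_alloc case, which requests 1025 pages.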
00:11:55.690 ************************************ 00:11:55.690 START TEST odd_alloc 00:11:55.690 ************************************ 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:55.690 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:56.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:56.519 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:56.519 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:56.519 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:56.519 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.519 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7907144 kB' 'MemAvailable: 9508648 kB' 'Buffers: 2436 kB' 'Cached: 1814952 kB' 'SwapCached: 0 kB' 'Active: 464428 kB' 'Inactive: 1474948 kB' 'Active(anon): 132460 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123604 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 139096 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 76044 kB' 'KernelStack: 6336 kB' 'PageTables: 4284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55000 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 
09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.520 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.520 09:25:56 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7909400 kB' 'MemAvailable: 9510904 kB' 'Buffers: 2436 kB' 'Cached: 1814952 kB' 'SwapCached: 0 kB' 'Active: 464444 kB' 'Inactive: 1474948 kB' 'Active(anon): 132476 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123660 kB' 'Mapped: 48932 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 139092 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 76040 kB' 'KernelStack: 6352 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 365852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.521 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:56 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.522 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 
09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local 
var val 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7910172 kB' 'MemAvailable: 9511676 kB' 'Buffers: 2436 kB' 'Cached: 1814952 kB' 'SwapCached: 0 kB' 'Active: 464284 kB' 'Inactive: 1474948 kB' 'Active(anon): 132316 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123484 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 139084 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 76032 kB' 'KernelStack: 6336 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 365852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
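The trace above is the meminfo lookup loop at work: the helper picks /proc/meminfo (or a node's sysfs copy when a node argument is given), maps the file into an array, strips any "Node <id> " prefix, then reads "key: value" pairs until the requested key (HugePages_Rsvd here) matches and its value is echoed. The following is a minimal sketch reconstructed from the names visible in the trace (get_meminfo, mem_f, the Node-prefix strip); it is an approximation for illustration, not the actual setup/common.sh.

    shopt -s extglob

    # Sketch of the lookup the trace above is exercising: echo the value of
    # one "key: value" line from a meminfo file.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem line
        # A per-node lookup (second argument set) reads the sysfs copy instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # The per-node file prefixes every line with "Node <id> "; drop the
        # prefix so both files parse the same way.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    # Example: get_meminfo HugePages_Rsvd     -> system-wide reserved hugepages
    #          get_meminfo HugePages_Surp 0   -> surplus hugepages on node 0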
00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.523 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.524 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:11:56.525 nr_hugepages=1025 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:56.525 resv_hugepages=0 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:56.525 surplus_hugepages=0 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:56.525 anon_hugepages=0 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7909816 kB' 'MemAvailable: 9511320 kB' 'Buffers: 2436 kB' 'Cached: 1814952 kB' 'SwapCached: 0 kB' 'Active: 464156 kB' 'Inactive: 1474948 kB' 'Active(anon): 132188 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123596 kB' 'Mapped: 48672 kB' 'Shmem: 10472 kB' 'KReclaimable: 63052 kB' 'Slab: 139076 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 76024 kB' 'KernelStack: 6304 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 366220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.525 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 
09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.526 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7910240 kB' 'MemUsed: 4331728 kB' 'SwapCached: 0 kB' 'Active: 464140 kB' 'Inactive: 1474948 kB' 'Active(anon): 132172 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1817388 kB' 'Mapped: 48672 kB' 'AnonPages: 123560 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63052 kB' 'Slab: 139072 kB' 'SReclaimable: 63052 kB' 'SUnreclaim: 76020 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
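Around these lookups, the traced hugepages.sh logic is an accounting check for the odd allocation: after requesting 1025 hugepages it reads back HugePages_Surp and HugePages_Rsvd (both 0 in this run), confirms HugePages_Total equals nr_hugepages plus surplus plus reserved, then walks /sys/devices/system/node/node* (only node0 on this VM) and repeats the per-node reads. A hedged sketch of that flow follows, reusing the get_meminfo sketch shown earlier; check_odd_alloc is a made-up name and the per-node comparison past this excerpt is not shown in the log, so only the visible arithmetic is reproduced.

    shopt -s extglob

    # Illustrative only: mirrors the accounting visible in the trace around
    # the 1025-page request; not an actual SPDK test function.
    check_odd_alloc() {
        local nr_hugepages=$1
        local surp resv total node
        surp=$(get_meminfo HugePages_Surp)      # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
        total=$(get_meminfo HugePages_Total)    # 1025 in this run
        # System-wide totals must add up exactly to the odd request.
        (( total == nr_hugepages + surp + resv )) || return 1
        # The same counters are then read back per NUMA node (only node0 here).
        for node in /sys/devices/system/node/node+([0-9]); do
            node=${node##*node}
            echo "node$node HugePages_Surp: $(get_meminfo HugePages_Surp "$node")"
        done
    }

    # check_odd_alloc 1025   # matches the nr_hugepages=1025 run in this log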
00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.527 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:11:56.528 node0=1025 expecting 1025 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:11:56.528 00:11:56.528 real 0m0.910s 00:11:56.528 user 0m0.414s 00:11:56.528 sys 0m0.540s 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:56.528 09:25:57 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.528 ************************************ 00:11:56.528 END TEST odd_alloc 00:11:56.528 ************************************ 00:11:56.787 09:25:57 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:11:56.787 09:25:57 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:56.787 09:25:57 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:56.787 09:25:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:56.787 ************************************ 00:11:56.787 START TEST custom_alloc 00:11:56.787 ************************************ 00:11:56.787 09:25:57 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:56.788 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:57.356 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:57.356 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.356 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.356 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.356 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:57.356 09:25:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:57.356 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8959188 kB' 'MemAvailable: 10560684 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 459672 kB' 'Inactive: 1474944 kB' 'Active(anon): 127704 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 119028 kB' 'Mapped: 47932 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138816 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75772 kB' 'KernelStack: 6156 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
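
Before this dump, the custom_alloc prologue (hugepages.sh@49-84 in the trace) converted the requested 1048576 kB pool into 512 pages of the 2048 kB default size and assigned all of them to the single NUMA node. A rough Bash sketch of that sizing step, with names taken from the trace; the even per-node split is simplified to the one-node case seen in this run, and the 2048 kB default is read off the "Hugepagesize" line of the meminfo dump:

# Sketch of the sizing logic traced in the custom_alloc prologue.
default_hugepages=2048        # kB per huge page on this VM (Hugepagesize)

get_test_nr_hugepages() {
	local size=$1             # requested pool size in kB (1048576 here)
	shift
	local user_nodes=("$@")   # optional per-node overrides (none in this run)

	(( size >= default_hugepages )) || return 1
	nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512

	# With no explicit node list and one NUMA node, every page lands on node 0.
	local _no_nodes=1
	nodes_test=()
	while (( _no_nodes > 0 )); do
		nodes_test[_no_nodes - 1]=$nr_hugepages
		(( _no_nodes-- ))
	done
}

get_test_nr_hugepages 1048576
echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # 512 / 512
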
00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.357 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.358 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8960104 kB' 'MemAvailable: 10561600 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 459564 kB' 'Inactive: 1474944 kB' 'Active(anon): 127596 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118996 kB' 'Mapped: 47932 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138784 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75740 kB' 'KernelStack: 6188 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54872 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 
0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.622 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
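
The surrounding trace is verify_nr_hugepages() re-reading HugePages_Surp and, next, HugePages_Rsvd for the freshly configured 512-page pool, mirroring the odd_alloc checks that finished above with "node0=1025 expecting 1025". An outline of that accounting, pieced together from the trace; it reuses the get_meminfo sketch shown earlier, assumes nr_hugepages and nodes_test are set as in the sizing sketch, and is a reconstruction rather than the shipped function:

# Outline of the verify_nr_hugepages() accounting (hugepages.sh@89-130).
verify_nr_hugepages_outline() {
	local anon=0 surp resv node

	# AnonHugePages is only consulted when THP is not pinned to "never"
	# ("always [madvise] never" in this run, so the lookup happens).
	if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
		anon=$(get_meminfo AnonHugePages)
	fi
	surp=$(get_meminfo HugePages_Surp)
	resv=$(get_meminfo HugePages_Rsvd)

	# Global pool check, as at hugepages.sh@110 for odd_alloc:
	# HugePages_Total (1025 there, 512 here) == nr_hugepages + surp + resv.
	(( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1

	# Per-node accounting: fold reserved and per-node surplus pages into the
	# expected count, then report it next to what the node actually provides
	# (the order of the two numbers in the message is an assumption; both are
	# equal in the traces above).
	for node in "${!nodes_test[@]}"; do
		(( nodes_test[node] += resv ))
		(( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
		echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
	done
}
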
00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.623 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.624 09:25:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8959856 kB' 'MemAvailable: 10561352 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 459512 kB' 'Inactive: 1474944 kB' 'Active(anon): 127544 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118948 kB' 'Mapped: 47932 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138744 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75700 kB' 'KernelStack: 6208 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 348180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54872 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.624 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.625 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:57.626 nr_hugepages=512 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:11:57.626 resv_hugepages=0 00:11:57.626 surplus_hugepages=0 00:11:57.626 anon_hugepages=0 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var 
val 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8959900 kB' 'MemAvailable: 10561396 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 459400 kB' 'Inactive: 1474944 kB' 'Active(anon): 127432 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 118568 kB' 'Mapped: 47932 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138744 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75700 kB' 'KernelStack: 6176 kB' 'PageTables: 3604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 345608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54808 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.626 09:25:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.626 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
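With surp and resv both read back as 0 (hugepages.sh@99 and @100 in the trace above) and nr_hugepages set to 512 for this test, the consistency checks at hugepages.sh@107 and @109 reduce to simple arithmetic against the HugePages_Total value reported in the meminfo dump. A worked form of that check, using only the numbers visible in this run (the variable names mirror the trace, but the snippet itself is illustrative, not the script's code):

nr_hugepages=512   # requested allocation for the custom_alloc test
surp=0             # get_meminfo HugePages_Surp
resv=0             # get_meminfo HugePages_Rsvd
total=512          # HugePages_Total from the /proc/meminfo dump above
(( total == nr_hugepages + surp + resv )) && echo "custom_alloc: totals consistent"
(( total == nr_hugepages ))               && echo "custom_alloc: no surplus or reserved pages"

Both conditions hold here (512 == 512 + 0 + 0), so the script proceeds to the per-node counters that are read further down in the trace.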
00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.627 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 8959900 kB' 'MemUsed: 3282068 kB' 'SwapCached: 0 kB' 'Active: 459396 kB' 'Inactive: 1474944 kB' 'Active(anon): 127428 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 
kB' 'Writeback: 0 kB' 'FilePages: 1817384 kB' 'Mapped: 47932 kB' 'AnonPages: 118788 kB' 'Shmem: 10472 kB' 'KernelStack: 6192 kB' 'PageTables: 3648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63044 kB' 'Slab: 138712 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.628 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.629 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:57.630 node0=512 expecting 512 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:57.630 00:11:57.630 real 0m0.914s 00:11:57.630 user 0m0.367s 00:11:57.630 sys 0m0.583s 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.630 ************************************ 00:11:57.630 END TEST custom_alloc 00:11:57.630 ************************************ 00:11:57.630 09:25:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:57.630 09:25:58 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:11:57.630 09:25:58 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:57.630 09:25:58 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.630 09:25:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:57.630 ************************************ 00:11:57.630 START TEST no_shrink_alloc 00:11:57.630 ************************************ 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:57.630 09:25:58 
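The no_shrink_alloc trace that resumes on the next line calls get_test_nr_hugepages 2097152 0; the sizing it performs reduces to dividing the requested 2097152 kB by the 2048 kB default hugepage size reported in the meminfo dumps above. A minimal sketch of that arithmetic (illustrative only, not the setup/hugepages.sh code itself):

    size_kb=2097152                      # requested allocation, in kB
    default_hugepages_kb=2048            # from 'Hugepagesize: 2048 kB' in the dumps above
    nr_hugepages=$(( size_kb / default_hugepages_kb ))   # -> 1024 pages
    declare -A nodes_test
    for node in 0; do                    # node list passed as the second argument
        nodes_test[$node]=$nr_hugepages
    done
    echo "expecting ${nodes_test[0]} hugepages on node0"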
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:57.630 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:58.198 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:58.461 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:58.461 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:58.461 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:58.461 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:58.461 09:25:58 
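The get_meminfo calls traced below (AnonHugePages, then HugePages_Surp and HugePages_Rsvd) scan /proc/meminfo field by field with IFS=': ', which is why every key in the file shows up as a [[ ... ]] / continue pair in the xtrace output. A simplified sketch of that lookup pattern, assuming plain /proc/meminfo (the real setup/common.sh helper also handles per-node meminfo files):

    get_meminfo_sketch() {
        local get=$1 var val _rest
        while IFS=': ' read -r var val _rest; do
            if [[ $var == "$get" ]]; then   # e.g. 'HugePages_Free'
                echo "$val"                 # value in kB, or a bare count for the HugePages_* fields
                return 0
            fi
        done < /proc/meminfo
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Free   -> 1024 on this VM, per the dumps below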
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:58.461 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7913240 kB' 'MemAvailable: 9514736 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 459736 kB' 'Inactive: 1474944 kB' 'Active(anon): 127768 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118880 kB' 'Mapped: 47936 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138624 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75580 kB' 'KernelStack: 6240 kB' 'PageTables: 3776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54888 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.462 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7913240 kB' 'MemAvailable: 9514736 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 459772 kB' 'Inactive: 1474944 kB' 'Active(anon): 127804 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118992 kB' 'Mapped: 47936 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138624 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75580 kB' 'KernelStack: 6224 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54872 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.463 09:25:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.463 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 
09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.464 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:58 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7913240 kB' 'MemAvailable: 9514736 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 459548 kB' 'Inactive: 1474944 kB' 'Active(anon): 127580 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118704 kB' 'Mapped: 47936 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138624 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75580 kB' 'KernelStack: 6224 kB' 'PageTables: 3740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 
kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54872 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.465 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 
09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.466 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:58.467 nr_hugepages=1024 00:11:58.467 resv_hugepages=0 00:11:58.467 surplus_hugepages=0 00:11:58.467 anon_hugepages=0 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7913760 kB' 'MemAvailable: 9515256 kB' 'Buffers: 2436 kB' 'Cached: 1814948 kB' 'SwapCached: 0 kB' 'Active: 459772 kB' 'Inactive: 1474944 kB' 'Active(anon): 127804 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 118940 kB' 'Mapped: 47936 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138624 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75580 kB' 'KernelStack: 6224 kB' 'PageTables: 3740 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54872 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.467 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 
09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 
09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.468 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
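The trace above keeps repeating a single pattern: setup/common.sh splits each /proc/meminfo line on ': ', compares the key against the field that was requested (HugePages_Rsvd, HugePages_Total, and so on), and echoes the value once it matches. A minimal standalone sketch of that pattern, assuming plain /proc/meminfo input (hypothetical helper name, not the SPDK script itself):

get_meminfo_value() {
    # Read /proc/meminfo, splitting each line on ':' and spaces, and print the
    # value of the requested field (e.g. HugePages_Total -> 1024, HugePages_Surp -> 0).
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done </proc/meminfo
    return 1
}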
00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:58.469 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7913760 kB' 'MemUsed: 4328208 kB' 'SwapCached: 0 kB' 'Active: 459744 kB' 'Inactive: 1474944 kB' 'Active(anon): 127776 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1817384 kB' 'Mapped: 47932 kB' 'AnonPages: 119164 kB' 'Shmem: 10472 kB' 'KernelStack: 6208 kB' 'PageTables: 3696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63044 kB' 'Slab: 138624 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
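Between these per-field lookups, hugepages.sh performs the verification visible at @107 and @110 above: the kernel-reported HugePages_Total has to equal nr_hugepages plus the surplus and reserved counts, and the same expectation is then accumulated per NUMA node before the final 'node0=1024 expecting 1024' comparison. A simplified, hypothetical restatement of that check (not the script's own code):

nr_hugepages=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
# Accept the allocation only when the reported total matches the requested
# count plus any surplus/reserved pages, as in hugepages.sh@110.
if (( total == nr_hugepages + surp + resv )); then
    echo "node0=${total} expecting ${nr_hugepages}"
fi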
00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.730 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 
09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:58.731 node0=1024 expecting 1024 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:58.731 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:11:58.732 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:11:58.732 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:11:58.732 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:58.732 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:58.991 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:59.251 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:59.251 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:59.251 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:59.251 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:59.251 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != 
*\[\n\e\v\e\r\]* ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7911084 kB' 'MemAvailable: 9512584 kB' 'Buffers: 2436 kB' 'Cached: 1814952 kB' 'SwapCached: 0 kB' 'Active: 460096 kB' 'Inactive: 1474948 kB' 'Active(anon): 128128 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119252 kB' 'Mapped: 48044 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138672 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75628 kB' 'KernelStack: 6272 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54856 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.251 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.252 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7910832 kB' 'MemAvailable: 9512332 kB' 'Buffers: 2436 kB' 'Cached: 1814952 kB' 'SwapCached: 0 kB' 'Active: 459648 kB' 'Inactive: 1474948 kB' 'Active(anon): 127680 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118808 kB' 'Mapped: 47992 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138676 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75632 kB' 'KernelStack: 6208 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54840 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.516 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.517 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:59.518 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7910832 kB' 'MemAvailable: 9512332 kB' 'Buffers: 2436 kB' 'Cached: 1814952 kB' 'SwapCached: 0 kB' 'Active: 459516 kB' 'Inactive: 1474948 kB' 'Active(anon): 127548 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118948 kB' 'Mapped: 47992 kB' 'Shmem: 10472 
kB' 'KReclaimable: 63044 kB' 'Slab: 138672 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75628 kB' 'KernelStack: 6208 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54824 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.519 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:59.520 nr_hugepages=1024 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:59.520 resv_hugepages=0 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:59.520 surplus_hugepages=0 00:11:59.520 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:59.521 anon_hugepages=0 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7910580 kB' 'MemAvailable: 9512080 kB' 'Buffers: 2436 kB' 'Cached: 1814952 kB' 'SwapCached: 0 kB' 'Active: 459520 kB' 'Inactive: 1474948 kB' 'Active(anon): 127552 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118688 kB' 'Mapped: 47992 kB' 'Shmem: 10472 kB' 'KReclaimable: 63044 kB' 'Slab: 138668 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75624 kB' 'KernelStack: 6208 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 345976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54824 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
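Just before this second scan, hugepages.sh recorded nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and the no_shrink_alloc case continues only while (( 1024 == nr_hugepages + surp + resv )) holds, i.e. the allocation/free cycle must not have shrunk the preallocated pool. A sketch of that consistency check, reusing the illustrative get_meminfo_value helper sketched earlier; the 1024 target is simply the value configured for this run, not a constant of the test.

# Assert the hugepage pool still holds exactly what was configured, with
# nothing reserved or borrowed as surplus after the allocation/free cycle.
expected=1024                                   # nr_hugepages configured for this run
total=$(get_meminfo_value HugePages_Total)
rsvd=$(get_meminfo_value HugePages_Rsvd)
surp=$(get_meminfo_value HugePages_Surp)

if (( total == expected && rsvd == 0 && surp == 0 )); then
    echo "hugepage pool intact: $total pages"
else
    echo "hugepage pool changed: total=$total rsvd=$rsvd surp=$surp" >&2
    exit 1
fi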
00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.521 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
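A few records below, the same helper is invoked again as get_meminfo HugePages_Surp 0: when a node argument is supplied and /sys/devices/system/node/node0/meminfo exists, it switches mem_f to the per-node file and strips the leading "Node 0 " prefix from every line before running the same key-matching loop. A sketch of that source-selection step, again with an illustrative function name and assuming extglob is enabled, as the traced mem=("${mem[@]#Node +([0-9]) }") expansion requires.

# Choose between the system-wide and per-node meminfo files, mirroring the
# mem_f / mapfile / prefix-strip steps visible in the trace.
shopt -s extglob                       # needed for the +([0-9]) pattern below

read_meminfo_lines() {                 # illustrative helper, not the SPDK one
    local node=$1 mem_f=/proc/meminfo
    local -a mem

    # Per-node lines look like "Node 0 MemTotal: ... kB"; fall back to the
    # global file when no node was requested or the sysfs path is missing.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
    printf '%s\n' "${mem[@]}"
}

read_meminfo_lines 0 | grep HugePages_   # per-node hugepage counters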
00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.522 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241968 kB' 'MemFree: 7910724 kB' 'MemUsed: 4331244 kB' 'SwapCached: 0 kB' 'Active: 459580 kB' 'Inactive: 1474948 kB' 'Active(anon): 127612 kB' 'Inactive(anon): 0 kB' 'Active(file): 331968 kB' 'Inactive(file): 1474948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1817388 kB' 'Mapped: 47936 kB' 'AnonPages: 118796 kB' 'Shmem: 10472 kB' 'KernelStack: 6224 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63044 kB' 'Slab: 138656 kB' 'SReclaimable: 63044 kB' 'SUnreclaim: 75612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.523 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:25:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:59.524 node0=1024 expecting 1024 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:59.524 00:11:59.524 real 0m1.858s 00:11:59.524 user 0m0.804s 00:11:59.524 sys 0m1.129s 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:59.524 09:26:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:59.524 ************************************ 00:11:59.524 END TEST no_shrink_alloc 00:11:59.524 ************************************ 00:11:59.525 09:26:00 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:11:59.525 09:26:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:59.525 09:26:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:59.525 09:26:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:59.525 09:26:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:59.525 09:26:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:59.525 09:26:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:59.525 09:26:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:59.525 09:26:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:59.525 ************************************ 00:11:59.525 END TEST hugepages 00:11:59.525 ************************************ 00:11:59.525 00:11:59.525 real 0m7.506s 00:11:59.525 user 0m3.141s 00:11:59.525 sys 0m4.608s 00:11:59.525 09:26:00 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:11:59.525 09:26:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:59.525 09:26:00 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:11:59.525 09:26:00 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:59.525 09:26:00 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:59.525 09:26:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:59.784 ************************************ 00:11:59.784 START TEST driver 00:11:59.784 ************************************ 00:11:59.784 09:26:00 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:11:59.784 * Looking for test storage... 00:11:59.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:59.784 09:26:00 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:11:59.784 09:26:00 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:59.784 09:26:00 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:06.354 09:26:06 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:12:06.355 09:26:06 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:06.355 09:26:06 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.355 09:26:06 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:12:06.355 ************************************ 00:12:06.355 START TEST guess_driver 00:12:06.355 ************************************ 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:12:06.355 insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:12:06.355 Looking for driver=uio_pci_generic 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:12:06.355 09:26:06 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:06.613 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:12:06.613 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:12:06.613 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:07.180 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:07.180 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:07.180 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:07.180 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:07.180 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:07.180 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:07.180 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:07.180 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:07.180 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:07.439 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:12:07.440 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:12:07.440 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:12:07.440 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:12:07.440 09:26:07 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:12:07.440 09:26:07 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:07.440 09:26:07 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:14.015 00:12:14.015 real 0m7.647s 00:12:14.015 user 0m0.884s 00:12:14.015 sys 0m1.918s 00:12:14.015 09:26:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.015 ************************************ 00:12:14.015 END TEST guess_driver 00:12:14.015 ************************************ 00:12:14.015 09:26:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:12:14.015 ************************************ 
00:12:14.015 END TEST driver 00:12:14.015 ************************************ 00:12:14.015 00:12:14.015 real 0m13.966s 00:12:14.015 user 0m1.278s 00:12:14.015 sys 0m2.994s 00:12:14.015 09:26:14 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:14.015 09:26:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:12:14.015 09:26:14 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:14.015 09:26:14 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:14.015 09:26:14 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:14.015 09:26:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:14.015 ************************************ 00:12:14.015 START TEST devices 00:12:14.015 ************************************ 00:12:14.015 09:26:14 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:12:14.015 * Looking for test storage... 00:12:14.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:12:14.016 09:26:14 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:12:14.016 09:26:14 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:12:14.016 09:26:14 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:12:14.016 09:26:14 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:14.949 09:26:15 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:14.949 09:26:15 setup.sh.devices 
-- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:14.949 09:26:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:15.209 09:26:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:12:15.209 09:26:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:12:15.209 09:26:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:12:15.209 09:26:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:15.209 09:26:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:12:15.209 09:26:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:12:15.209 09:26:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:12:15.209 09:26:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:15.209 09:26:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:12:15.209 No valid GPT data, bailing 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:12:15.209 
09:26:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:15.209 09:26:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:15.209 09:26:15 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:12:15.209 No valid GPT data, bailing 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:12:15.209 09:26:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:12:15.209 09:26:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:12:15.209 09:26:15 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:12:15.209 No valid GPT data, bailing 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:12:15.209 09:26:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:12:15.209 09:26:15 setup.sh.devices 
-- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:12:15.209 09:26:15 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:12:15.209 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:12:15.209 09:26:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:12:15.469 No valid GPT data, bailing 00:12:15.469 09:26:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:12:15.469 09:26:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:15.469 09:26:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:12:15.469 09:26:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:12:15.469 09:26:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:12:15.469 09:26:15 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:12:15.469 09:26:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:12:15.469 09:26:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:12:15.469 No valid GPT data, bailing 00:12:15.469 09:26:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:12:15.469 09:26:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:15.469 09:26:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:12:15.469 09:26:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:12:15.469 09:26:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:12:15.469 09:26:15 setup.sh.devices -- 
setup/common.sh@80 -- # echo 4294967296 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:12:15.469 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:12:15.470 09:26:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:12:15.470 09:26:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:12:15.470 09:26:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:12:15.470 09:26:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:12:15.470 09:26:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:12:15.470 09:26:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:12:15.470 No valid GPT data, bailing 00:12:15.470 09:26:16 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:12:15.470 09:26:16 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:12:15.470 09:26:16 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:12:15.470 09:26:16 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:12:15.470 09:26:16 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:12:15.470 09:26:16 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:12:15.470 09:26:16 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:12:15.470 09:26:16 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:12:15.470 09:26:16 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:12:15.470 09:26:16 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:12:15.470 09:26:16 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:12:15.470 09:26:16 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:15.470 09:26:16 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:15.470 09:26:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:15.470 ************************************ 00:12:15.470 START TEST nvme_mount 00:12:15.470 ************************************ 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:12:15.470 
09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:15.470 09:26:16 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:12:16.495 Creating new GPT entries in memory. 00:12:16.495 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:16.495 other utilities. 00:12:16.495 09:26:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:12:16.495 09:26:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:16.495 09:26:17 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:16.495 09:26:17 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:16.495 09:26:17 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:17.873 Creating new GPT entries in memory. 00:12:17.873 The operation has completed successfully. 
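(For reference, the nvme_mount steps traced above reduce to zapping the disk's partition table, creating one GPT partition, formatting it with ext4, and mounting it for the dummy test file. A minimal hand-run sketch of that flow follows; the device name and mount point are illustrative placeholders, not values taken from this run.)
  disk=/dev/nvme0n1                        # illustrative device, not from this run
  mnt=/tmp/nvme_mount_test                 # illustrative mount point
  sgdisk "$disk" --zap-all                 # wipe any existing GPT/MBR metadata
  sgdisk "$disk" --new=1:2048:264191       # one partition spanning sectors 2048-264191
  mkfs.ext4 -qF "${disk}p1"                # format the new partition
  mkdir -p "$mnt"
  mount "${disk}p1" "$mnt"                 # mount it so a test file can be written
  touch "$mnt/test_nvme"                   # dummy file, analogous to test_nvme above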
00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 59670 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.873 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:12:17.874 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:17.874 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:17.874 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.874 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:18.132 09:26:18 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.132 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:18.132 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.132 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:18.132 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.132 09:26:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:18.699 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.699 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:18.699 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:18.699 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:18.699 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:18.958 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:18.958 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:18.958 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:12:18.958 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:18.958 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:18.958 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:18.958 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:18.958 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:18.958 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:18.958 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:19.217 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:19.217 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:19.217 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:19.217 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:19.217 09:26:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:19.477 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.477 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:12:19.477 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:19.477 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:19.477 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.477 09:26:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:19.735 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.735 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:19.735 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.735 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:19.735 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.735 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:19.994 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.994 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:20.252 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- 
setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:20.253 09:26:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:20.512 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:20.512 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:12:20.512 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:12:20.512 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:20.512 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:20.512 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:20.771 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:20.771 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:20.771 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:20.771 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:20.771 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:20.771 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:21.339 09:26:21 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:21.339 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:21.339 00:12:21.339 real 0m5.878s 00:12:21.339 user 0m1.525s 00:12:21.339 sys 0m2.057s 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.339 09:26:21 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:12:21.339 ************************************ 00:12:21.339 END TEST nvme_mount 00:12:21.339 ************************************ 00:12:21.599 09:26:21 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:12:21.599 09:26:21 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:21.599 09:26:21 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.599 09:26:21 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:21.599 ************************************ 00:12:21.599 START TEST dm_mount 00:12:21.599 ************************************ 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- 
setup/common.sh@46 -- # (( part++ )) 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:12:21.599 09:26:22 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:12:22.608 Creating new GPT entries in memory. 00:12:22.608 GPT data structures destroyed! You may now partition the disk using fdisk or 00:12:22.608 other utilities. 00:12:22.608 09:26:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:12:22.608 09:26:23 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:22.608 09:26:23 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:22.608 09:26:23 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:22.608 09:26:23 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:12:23.547 Creating new GPT entries in memory. 00:12:23.547 The operation has completed successfully. 00:12:23.547 09:26:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:23.547 09:26:24 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:23.547 09:26:24 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:12:23.547 09:26:24 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:12:23.547 09:26:24 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:12:24.483 The operation has completed successfully. 
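(The dm_mount steps that follow build a device-mapper device named nvme_dm_test over the two partitions just created, then format and mount it. The exact dm table is not shown in this log, so the sketch below assumes a simple linear concatenation purely for illustration; device names and the mount point are placeholders.)
  p1=/dev/nvme0n1p1; p2=/dev/nvme0n1p2     # the two partitions created above
  s1=$(blockdev --getsz "$p1")             # partition sizes in 512-byte sectors
  s2=$(blockdev --getsz "$p2")
  dmsetup create nvme_dm_test <<EOF
  0 $s1 linear $p1 0
  $s1 $s2 linear $p2 0
  EOF
  mkfs.ext4 -qF /dev/mapper/nvme_dm_test   # format and mount the mapped device
  mkdir -p /tmp/dm_mount_test
  mount /dev/mapper/nvme_dm_test /tmp/dm_mount_test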
00:12:24.483 09:26:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:12:24.483 09:26:25 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:12:24.483 09:26:25 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 60304 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:24.742 09:26:25 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:25.002 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:25.002 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:12:25.002 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:25.002 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:25.002 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:25.002 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:25.002 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:25.002 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:25.261 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:25.261 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:25.261 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:25.261 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:25.545 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:25.545 09:26:25 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:12:25.804 09:26:26 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:12:26.064 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.064 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:12:26.064 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:12:26.064 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:26.064 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.064 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:26.323 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.324 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:26.324 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.324 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:26.324 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.324 09:26:26 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:26.583 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:26.583 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:12:26.842 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:26.842 09:26:27 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:12:27.102 00:12:27.102 real 0m5.458s 00:12:27.102 user 0m0.996s 00:12:27.102 sys 0m1.382s 00:12:27.102 09:26:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.102 09:26:27 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:12:27.102 ************************************ 00:12:27.102 END TEST dm_mount 00:12:27.102 ************************************ 00:12:27.102 09:26:27 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:12:27.102 09:26:27 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:12:27.102 09:26:27 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:12:27.102 09:26:27 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:27.102 09:26:27 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:12:27.102 09:26:27 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:12:27.102 09:26:27 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:12:27.362 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:12:27.362 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:12:27.362 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:12:27.362 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:12:27.362 09:26:27 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:12:27.362 09:26:27 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:12:27.362 09:26:27 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:12:27.362 09:26:27 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:12:27.362 09:26:27 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:12:27.362 09:26:27 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:12:27.362 09:26:27 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:12:27.362 ************************************ 00:12:27.362 END TEST devices 00:12:27.362 ************************************ 00:12:27.362 00:12:27.362 real 0m13.655s 00:12:27.362 user 0m3.436s 00:12:27.362 sys 0m4.561s 00:12:27.362 09:26:27 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.362 09:26:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:12:27.362 00:12:27.362 real 0m51.075s 00:12:27.362 user 0m11.365s 00:12:27.362 sys 0m21.737s 00:12:27.362 09:26:27 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:27.362 09:26:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:12:27.362 ************************************ 00:12:27.362 END TEST setup.sh 00:12:27.362 ************************************ 00:12:27.362 09:26:27 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:12:27.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:28.500 Hugepages 00:12:28.500 node hugesize free / total 00:12:28.500 node0 1048576kB 0 / 0 00:12:28.500 node0 2048kB 2048 / 2048 00:12:28.500 
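The per-node hugepage counters printed above (free / total) are read from sysfs; a minimal way to reproduce them by hand, assuming the standard kernel hugepage layout, is:

    for d in /sys/devices/system/node/node0/hugepages/hugepages-*; do
        echo "$(basename "$d"): $(cat "$d/free_hugepages") / $(cat "$d/nr_hugepages")"
    done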
00:12:28.500 Type BDF Vendor Device NUMA Driver Device Block devices 00:12:28.758 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:12:28.758 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:12:28.758 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:12:29.017 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:12:29.017 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:12:29.017 09:26:29 -- spdk/autotest.sh@130 -- # uname -s 00:12:29.017 09:26:29 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:12:29.017 09:26:29 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:12:29.017 09:26:29 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:29.584 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:30.517 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.517 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.517 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.517 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.517 09:26:31 -- common/autotest_common.sh@1532 -- # sleep 1 00:12:31.449 09:26:32 -- common/autotest_common.sh@1533 -- # bdfs=() 00:12:31.449 09:26:32 -- common/autotest_common.sh@1533 -- # local bdfs 00:12:31.449 09:26:32 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:12:31.449 09:26:32 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:12:31.449 09:26:32 -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:31.449 09:26:32 -- common/autotest_common.sh@1513 -- # local bdfs 00:12:31.449 09:26:32 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:31.449 09:26:32 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:31.449 09:26:32 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:31.707 09:26:32 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:31.707 09:26:32 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:31.707 09:26:32 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:32.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:32.273 Waiting for block devices as requested 00:12:32.531 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:32.531 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:32.531 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:32.789 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:38.077 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:38.077 09:26:38 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:12:38.077 09:26:38 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:12:38.077 09:26:38 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:12:38.077 09:26:38 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:12:38.077 09:26:38 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:12:38.077 09:26:38 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:12:38.077 09:26:38 -- common/autotest_common.sh@1507 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:12:38.077 09:26:38 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:12:38.077 09:26:38 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:12:38.077 09:26:38 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:12:38.077 09:26:38 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:12:38.077 09:26:38 -- common/autotest_common.sh@1545 -- # grep oacs 00:12:38.077 09:26:38 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:12:38.077 09:26:38 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:12:38.077 09:26:38 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:12:38.077 09:26:38 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:12:38.077 09:26:38 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:12:38.077 09:26:38 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:12:38.077 09:26:38 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:12:38.077 09:26:38 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:12:38.078 09:26:38 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1557 -- # continue 00:12:38.078 09:26:38 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:12:38.078 09:26:38 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:12:38.078 09:26:38 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:12:38.078 09:26:38 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:12:38.078 09:26:38 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:12:38.078 09:26:38 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:12:38.078 09:26:38 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:12:38.078 09:26:38 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:12:38.078 09:26:38 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # grep oacs 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:12:38.078 09:26:38 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:12:38.078 09:26:38 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:12:38.078 09:26:38 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1557 -- # continue 00:12:38.078 09:26:38 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:12:38.078 09:26:38 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:12:38.078 09:26:38 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:12:38.078 09:26:38 -- common/autotest_common.sh@1502 -- # 
grep 0000:00:12.0/nvme/nvme 00:12:38.078 09:26:38 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme2 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # grep oacs 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:12:38.078 09:26:38 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:12:38.078 09:26:38 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:12:38.078 09:26:38 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1557 -- # continue 00:12:38.078 09:26:38 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:12:38.078 09:26:38 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:12:38.078 09:26:38 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:12:38.078 09:26:38 -- common/autotest_common.sh@1502 -- # grep 0000:00:13.0/nvme/nvme 00:12:38.078 09:26:38 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:12:38.078 09:26:38 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:12:38.078 09:26:38 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme3 00:12:38.078 09:26:38 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme3 00:12:38.078 09:26:38 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme3 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme3 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # grep oacs 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:12:38.078 09:26:38 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:12:38.078 09:26:38 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme3 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:12:38.078 09:26:38 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:12:38.078 09:26:38 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:12:38.078 09:26:38 -- common/autotest_common.sh@1557 -- # continue 00:12:38.078 09:26:38 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:12:38.078 09:26:38 -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:38.078 09:26:38 -- common/autotest_common.sh@10 -- # set +x 00:12:38.078 09:26:38 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:12:38.078 09:26:38 -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:38.078 09:26:38 -- common/autotest_common.sh@10 -- # set +x 00:12:38.078 09:26:38 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:38.646 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:39.215 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:39.215 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:39.215 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:39.475 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:39.475 09:26:39 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:12:39.475 09:26:39 -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:39.475 09:26:39 -- common/autotest_common.sh@10 -- # set +x 00:12:39.475 09:26:40 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:12:39.475 09:26:40 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:12:39.475 09:26:40 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:12:39.475 09:26:40 -- common/autotest_common.sh@1577 -- # bdfs=() 00:12:39.475 09:26:40 -- common/autotest_common.sh@1577 -- # local bdfs 00:12:39.475 09:26:40 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:12:39.475 09:26:40 -- common/autotest_common.sh@1513 -- # bdfs=() 00:12:39.475 09:26:40 -- common/autotest_common.sh@1513 -- # local bdfs 00:12:39.475 09:26:40 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:39.475 09:26:40 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:39.475 09:26:40 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:12:39.735 09:26:40 -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:12:39.735 09:26:40 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:39.735 09:26:40 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:12:39.735 09:26:40 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:12:39.735 09:26:40 -- common/autotest_common.sh@1580 -- # device=0x0010 00:12:39.735 09:26:40 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:12:39.735 09:26:40 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:12:39.735 09:26:40 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:12:39.735 09:26:40 -- common/autotest_common.sh@1580 -- # device=0x0010 00:12:39.735 09:26:40 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:12:39.735 09:26:40 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:12:39.735 09:26:40 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:12:39.735 09:26:40 -- common/autotest_common.sh@1580 -- # device=0x0010 00:12:39.735 09:26:40 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:12:39.735 09:26:40 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:12:39.735 09:26:40 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:12:39.735 09:26:40 -- common/autotest_common.sh@1580 -- # device=0x0010 00:12:39.735 
09:26:40 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:12:39.735 09:26:40 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:12:39.735 09:26:40 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:12:39.735 09:26:40 -- common/autotest_common.sh@1593 -- # return 0 00:12:39.735 09:26:40 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:12:39.735 09:26:40 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:12:39.735 09:26:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:12:39.735 09:26:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:12:39.735 09:26:40 -- spdk/autotest.sh@162 -- # timing_enter lib 00:12:39.735 09:26:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:39.735 09:26:40 -- common/autotest_common.sh@10 -- # set +x 00:12:39.735 09:26:40 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:12:39.735 09:26:40 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:12:39.735 09:26:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:39.735 09:26:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.735 09:26:40 -- common/autotest_common.sh@10 -- # set +x 00:12:39.735 ************************************ 00:12:39.735 START TEST env 00:12:39.735 ************************************ 00:12:39.735 09:26:40 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:12:39.735 * Looking for test storage... 00:12:39.735 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:12:39.735 09:26:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:12:39.735 09:26:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:39.735 09:26:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.735 09:26:40 env -- common/autotest_common.sh@10 -- # set +x 00:12:39.735 ************************************ 00:12:39.735 START TEST env_memory 00:12:39.735 ************************************ 00:12:39.735 09:26:40 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:12:39.735 00:12:39.735 00:12:39.735 CUnit - A unit testing framework for C - Version 2.1-3 00:12:39.735 http://cunit.sourceforge.net/ 00:12:39.735 00:12:39.735 00:12:39.735 Suite: memory 00:12:39.995 Test: alloc and free memory map ...[2024-07-25 09:26:40.355268] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:12:39.995 passed 00:12:39.995 Test: mem map translation ...[2024-07-25 09:26:40.403249] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:12:39.995 [2024-07-25 09:26:40.403328] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:12:39.995 [2024-07-25 09:26:40.403406] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:12:39.995 [2024-07-25 09:26:40.403431] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:12:39.995 passed 00:12:39.995 Test: mem map registration ...[2024-07-25 09:26:40.491556] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:12:39.995 [2024-07-25 09:26:40.491651] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:12:39.995 passed 00:12:39.995 Test: mem map adjacent registrations ...passed 00:12:39.995 00:12:39.995 Run Summary: Type Total Ran Passed Failed Inactive 00:12:39.995 suites 1 1 n/a 0 0 00:12:39.995 tests 4 4 4 0 0 00:12:39.995 asserts 152 152 152 0 n/a 00:12:39.995 00:12:39.995 Elapsed time = 0.282 seconds 00:12:40.255 00:12:40.255 real 0m0.335s 00:12:40.255 user 0m0.304s 00:12:40.255 sys 0m0.023s 00:12:40.255 09:26:40 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:40.255 09:26:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:12:40.255 ************************************ 00:12:40.255 END TEST env_memory 00:12:40.255 ************************************ 00:12:40.255 09:26:40 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:12:40.255 09:26:40 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:40.255 09:26:40 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:40.255 09:26:40 env -- common/autotest_common.sh@10 -- # set +x 00:12:40.255 ************************************ 00:12:40.255 START TEST env_vtophys 00:12:40.255 ************************************ 00:12:40.255 09:26:40 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:12:40.255 EAL: lib.eal log level changed from notice to debug 00:12:40.255 EAL: Detected lcore 0 as core 0 on socket 0 00:12:40.255 EAL: Detected lcore 1 as core 0 on socket 0 00:12:40.255 EAL: Detected lcore 2 as core 0 on socket 0 00:12:40.255 EAL: Detected lcore 3 as core 0 on socket 0 00:12:40.255 EAL: Detected lcore 4 as core 0 on socket 0 00:12:40.255 EAL: Detected lcore 5 as core 0 on socket 0 00:12:40.255 EAL: Detected lcore 6 as core 0 on socket 0 00:12:40.255 EAL: Detected lcore 7 as core 0 on socket 0 00:12:40.255 EAL: Detected lcore 8 as core 0 on socket 0 00:12:40.255 EAL: Detected lcore 9 as core 0 on socket 0 00:12:40.255 EAL: Maximum logical cores by configuration: 128 00:12:40.255 EAL: Detected CPU lcores: 10 00:12:40.255 EAL: Detected NUMA nodes: 1 00:12:40.255 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:12:40.255 EAL: Detected shared linkage of DPDK 00:12:40.255 EAL: No shared files mode enabled, IPC will be disabled 00:12:40.255 EAL: Selected IOVA mode 'PA' 00:12:40.255 EAL: Probing VFIO support... 00:12:40.255 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:12:40.255 EAL: VFIO modules not loaded, skipping VFIO support... 00:12:40.255 EAL: Ask a virtual area of 0x2e000 bytes 00:12:40.255 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:12:40.255 EAL: Setting up physically contiguous memory... 
00:12:40.255 EAL: Setting maximum number of open files to 524288 00:12:40.255 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:12:40.255 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:12:40.255 EAL: Ask a virtual area of 0x61000 bytes 00:12:40.255 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:12:40.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:40.255 EAL: Ask a virtual area of 0x400000000 bytes 00:12:40.255 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:12:40.255 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:12:40.255 EAL: Ask a virtual area of 0x61000 bytes 00:12:40.255 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:12:40.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:40.255 EAL: Ask a virtual area of 0x400000000 bytes 00:12:40.255 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:12:40.255 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:12:40.255 EAL: Ask a virtual area of 0x61000 bytes 00:12:40.255 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:12:40.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:40.255 EAL: Ask a virtual area of 0x400000000 bytes 00:12:40.255 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:12:40.255 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:12:40.255 EAL: Ask a virtual area of 0x61000 bytes 00:12:40.255 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:12:40.255 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:12:40.255 EAL: Ask a virtual area of 0x400000000 bytes 00:12:40.255 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:12:40.255 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:12:40.255 EAL: Hugepages will be freed exactly as allocated. 00:12:40.255 EAL: No shared files mode enabled, IPC is disabled 00:12:40.255 EAL: No shared files mode enabled, IPC is disabled 00:12:40.255 EAL: TSC frequency is ~2290000 KHz 00:12:40.255 EAL: Main lcore 0 is ready (tid=7f3924810a40;cpuset=[0]) 00:12:40.255 EAL: Trying to obtain current memory policy. 00:12:40.255 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:40.255 EAL: Restoring previous memory policy: 0 00:12:40.255 EAL: request: mp_malloc_sync 00:12:40.255 EAL: No shared files mode enabled, IPC is disabled 00:12:40.255 EAL: Heap on socket 0 was expanded by 2MB 00:12:40.255 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:12:40.516 EAL: No PCI address specified using 'addr=' in: bus=pci 00:12:40.516 EAL: Mem event callback 'spdk:(nil)' registered 00:12:40.516 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:12:40.516 00:12:40.516 00:12:40.516 CUnit - A unit testing framework for C - Version 2.1-3 00:12:40.516 http://cunit.sourceforge.net/ 00:12:40.516 00:12:40.516 00:12:40.516 Suite: components_suite 00:12:40.775 Test: vtophys_malloc_test ...passed 00:12:40.775 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
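Each of the four memseg lists reserved above covers 8192 segments of 2 MiB (hugepage_sz:2097152), which is exactly the 0x400000000-byte (16 GiB) virtual-area reservations shown; a quick check of that arithmetic:

    printf '0x%x bytes per memseg list\n' $(( 8192 * 2097152 ))   # 0x400000000 = 16 GiB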
00:12:40.775 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:40.775 EAL: Restoring previous memory policy: 4 00:12:40.775 EAL: Calling mem event callback 'spdk:(nil)' 00:12:40.775 EAL: request: mp_malloc_sync 00:12:40.775 EAL: No shared files mode enabled, IPC is disabled 00:12:40.775 EAL: Heap on socket 0 was expanded by 4MB 00:12:40.775 EAL: Calling mem event callback 'spdk:(nil)' 00:12:40.775 EAL: request: mp_malloc_sync 00:12:40.775 EAL: No shared files mode enabled, IPC is disabled 00:12:40.775 EAL: Heap on socket 0 was shrunk by 4MB 00:12:40.775 EAL: Trying to obtain current memory policy. 00:12:40.775 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:40.775 EAL: Restoring previous memory policy: 4 00:12:40.775 EAL: Calling mem event callback 'spdk:(nil)' 00:12:40.775 EAL: request: mp_malloc_sync 00:12:40.775 EAL: No shared files mode enabled, IPC is disabled 00:12:40.775 EAL: Heap on socket 0 was expanded by 6MB 00:12:40.775 EAL: Calling mem event callback 'spdk:(nil)' 00:12:40.775 EAL: request: mp_malloc_sync 00:12:40.775 EAL: No shared files mode enabled, IPC is disabled 00:12:40.775 EAL: Heap on socket 0 was shrunk by 6MB 00:12:40.775 EAL: Trying to obtain current memory policy. 00:12:40.775 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:40.775 EAL: Restoring previous memory policy: 4 00:12:40.775 EAL: Calling mem event callback 'spdk:(nil)' 00:12:40.775 EAL: request: mp_malloc_sync 00:12:40.775 EAL: No shared files mode enabled, IPC is disabled 00:12:40.775 EAL: Heap on socket 0 was expanded by 10MB 00:12:40.775 EAL: Calling mem event callback 'spdk:(nil)' 00:12:40.776 EAL: request: mp_malloc_sync 00:12:40.776 EAL: No shared files mode enabled, IPC is disabled 00:12:40.776 EAL: Heap on socket 0 was shrunk by 10MB 00:12:40.776 EAL: Trying to obtain current memory policy. 00:12:40.776 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:40.776 EAL: Restoring previous memory policy: 4 00:12:40.776 EAL: Calling mem event callback 'spdk:(nil)' 00:12:40.776 EAL: request: mp_malloc_sync 00:12:40.776 EAL: No shared files mode enabled, IPC is disabled 00:12:40.776 EAL: Heap on socket 0 was expanded by 18MB 00:12:41.034 EAL: Calling mem event callback 'spdk:(nil)' 00:12:41.034 EAL: request: mp_malloc_sync 00:12:41.034 EAL: No shared files mode enabled, IPC is disabled 00:12:41.034 EAL: Heap on socket 0 was shrunk by 18MB 00:12:41.034 EAL: Trying to obtain current memory policy. 00:12:41.034 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:41.034 EAL: Restoring previous memory policy: 4 00:12:41.034 EAL: Calling mem event callback 'spdk:(nil)' 00:12:41.034 EAL: request: mp_malloc_sync 00:12:41.034 EAL: No shared files mode enabled, IPC is disabled 00:12:41.034 EAL: Heap on socket 0 was expanded by 34MB 00:12:41.034 EAL: Calling mem event callback 'spdk:(nil)' 00:12:41.034 EAL: request: mp_malloc_sync 00:12:41.034 EAL: No shared files mode enabled, IPC is disabled 00:12:41.034 EAL: Heap on socket 0 was shrunk by 34MB 00:12:41.034 EAL: Trying to obtain current memory policy. 
00:12:41.034 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:41.034 EAL: Restoring previous memory policy: 4 00:12:41.034 EAL: Calling mem event callback 'spdk:(nil)' 00:12:41.034 EAL: request: mp_malloc_sync 00:12:41.034 EAL: No shared files mode enabled, IPC is disabled 00:12:41.034 EAL: Heap on socket 0 was expanded by 66MB 00:12:41.293 EAL: Calling mem event callback 'spdk:(nil)' 00:12:41.293 EAL: request: mp_malloc_sync 00:12:41.293 EAL: No shared files mode enabled, IPC is disabled 00:12:41.293 EAL: Heap on socket 0 was shrunk by 66MB 00:12:41.293 EAL: Trying to obtain current memory policy. 00:12:41.293 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:41.293 EAL: Restoring previous memory policy: 4 00:12:41.293 EAL: Calling mem event callback 'spdk:(nil)' 00:12:41.293 EAL: request: mp_malloc_sync 00:12:41.293 EAL: No shared files mode enabled, IPC is disabled 00:12:41.293 EAL: Heap on socket 0 was expanded by 130MB 00:12:41.866 EAL: Calling mem event callback 'spdk:(nil)' 00:12:41.866 EAL: request: mp_malloc_sync 00:12:41.866 EAL: No shared files mode enabled, IPC is disabled 00:12:41.866 EAL: Heap on socket 0 was shrunk by 130MB 00:12:41.866 EAL: Trying to obtain current memory policy. 00:12:41.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:41.866 EAL: Restoring previous memory policy: 4 00:12:41.866 EAL: Calling mem event callback 'spdk:(nil)' 00:12:41.866 EAL: request: mp_malloc_sync 00:12:41.866 EAL: No shared files mode enabled, IPC is disabled 00:12:41.866 EAL: Heap on socket 0 was expanded by 258MB 00:12:42.442 EAL: Calling mem event callback 'spdk:(nil)' 00:12:42.701 EAL: request: mp_malloc_sync 00:12:42.701 EAL: No shared files mode enabled, IPC is disabled 00:12:42.701 EAL: Heap on socket 0 was shrunk by 258MB 00:12:42.958 EAL: Trying to obtain current memory policy. 00:12:42.958 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:43.218 EAL: Restoring previous memory policy: 4 00:12:43.218 EAL: Calling mem event callback 'spdk:(nil)' 00:12:43.218 EAL: request: mp_malloc_sync 00:12:43.218 EAL: No shared files mode enabled, IPC is disabled 00:12:43.218 EAL: Heap on socket 0 was expanded by 514MB 00:12:44.154 EAL: Calling mem event callback 'spdk:(nil)' 00:12:44.413 EAL: request: mp_malloc_sync 00:12:44.413 EAL: No shared files mode enabled, IPC is disabled 00:12:44.413 EAL: Heap on socket 0 was shrunk by 514MB 00:12:45.348 EAL: Trying to obtain current memory policy. 
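The heap-expansion sizes exercised by vtophys_spdk_malloc_test (4, 6, 10, 18, 34, 66, 130, 258 and 514 MB above, with a final 1026 MB round below) follow a 2^k + 2 MB progression, so each round roughly doubles the allocation being expanded and shrunk; the sequence can be reproduced with:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo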
00:12:45.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:12:45.348 EAL: Restoring previous memory policy: 4 00:12:45.348 EAL: Calling mem event callback 'spdk:(nil)' 00:12:45.348 EAL: request: mp_malloc_sync 00:12:45.348 EAL: No shared files mode enabled, IPC is disabled 00:12:45.348 EAL: Heap on socket 0 was expanded by 1026MB 00:12:47.895 EAL: Calling mem event callback 'spdk:(nil)' 00:12:47.895 EAL: request: mp_malloc_sync 00:12:47.895 EAL: No shared files mode enabled, IPC is disabled 00:12:47.895 EAL: Heap on socket 0 was shrunk by 1026MB 00:12:49.800 passed 00:12:49.800 00:12:49.800 Run Summary: Type Total Ran Passed Failed Inactive 00:12:49.800 suites 1 1 n/a 0 0 00:12:49.800 tests 2 2 2 0 0 00:12:49.800 asserts 5474 5474 5474 0 n/a 00:12:49.800 00:12:49.800 Elapsed time = 9.018 seconds 00:12:49.800 EAL: Calling mem event callback 'spdk:(nil)' 00:12:49.800 EAL: request: mp_malloc_sync 00:12:49.800 EAL: No shared files mode enabled, IPC is disabled 00:12:49.800 EAL: Heap on socket 0 was shrunk by 2MB 00:12:49.800 EAL: No shared files mode enabled, IPC is disabled 00:12:49.800 EAL: No shared files mode enabled, IPC is disabled 00:12:49.800 EAL: No shared files mode enabled, IPC is disabled 00:12:49.800 00:12:49.800 real 0m9.379s 00:12:49.800 user 0m8.396s 00:12:49.801 sys 0m0.823s 00:12:49.801 09:26:50 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.801 09:26:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:12:49.801 ************************************ 00:12:49.801 END TEST env_vtophys 00:12:49.801 ************************************ 00:12:49.801 09:26:50 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:12:49.801 09:26:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:49.801 09:26:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.801 09:26:50 env -- common/autotest_common.sh@10 -- # set +x 00:12:49.801 ************************************ 00:12:49.801 START TEST env_pci 00:12:49.801 ************************************ 00:12:49.801 09:26:50 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:12:49.801 00:12:49.801 00:12:49.801 CUnit - A unit testing framework for C - Version 2.1-3 00:12:49.801 http://cunit.sourceforge.net/ 00:12:49.801 00:12:49.801 00:12:49.801 Suite: pci 00:12:49.801 Test: pci_hook ...[2024-07-25 09:26:50.139985] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 62176 has claimed it 00:12:49.801 passed 00:12:49.801 00:12:49.801 Run Summary: Type Total Ran Passed Failed Inactive 00:12:49.801 suites 1 1 n/a 0 0 00:12:49.801 tests 1 1 1 0 0 00:12:49.801 asserts 25 25 25 0 n/a 00:12:49.801 00:12:49.801 Elapsed time = 0.014 seconds 00:12:49.801 EAL: Cannot find device (10000:00:01.0) 00:12:49.801 EAL: Failed to attach device on primary process 00:12:49.801 00:12:49.801 real 0m0.103s 00:12:49.801 user 0m0.052s 00:12:49.801 sys 0m0.050s 00:12:49.801 09:26:50 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.801 09:26:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:12:49.801 ************************************ 00:12:49.801 END TEST env_pci 00:12:49.801 ************************************ 00:12:49.801 09:26:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:12:49.801 09:26:50 env -- env/env.sh@15 -- # uname 00:12:49.801 09:26:50 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:12:49.801 09:26:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:12:49.801 09:26:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:49.801 09:26:50 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:49.801 09:26:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.801 09:26:50 env -- common/autotest_common.sh@10 -- # set +x 00:12:49.801 ************************************ 00:12:49.801 START TEST env_dpdk_post_init 00:12:49.801 ************************************ 00:12:49.801 09:26:50 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:12:49.801 EAL: Detected CPU lcores: 10 00:12:49.801 EAL: Detected NUMA nodes: 1 00:12:49.801 EAL: Detected shared linkage of DPDK 00:12:49.801 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:12:49.801 EAL: Selected IOVA mode 'PA' 00:12:50.060 TELEMETRY: No legacy callbacks, legacy socket not created 00:12:50.060 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:12:50.060 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:12:50.060 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:12:50.060 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:12:50.060 Starting DPDK initialization... 00:12:50.060 Starting SPDK post initialization... 00:12:50.060 SPDK NVMe probe 00:12:50.060 Attaching to 0000:00:10.0 00:12:50.060 Attaching to 0000:00:11.0 00:12:50.060 Attaching to 0000:00:12.0 00:12:50.060 Attaching to 0000:00:13.0 00:12:50.060 Attached to 0000:00:10.0 00:12:50.060 Attached to 0000:00:11.0 00:12:50.060 Attached to 0000:00:13.0 00:12:50.060 Attached to 0000:00:12.0 00:12:50.060 Cleaning up... 
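For reference, the post-init probe above can be re-run on its own (typically as root) with the same flags the harness passed it:

    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000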
00:12:50.060 00:12:50.060 real 0m0.291s 00:12:50.060 user 0m0.105s 00:12:50.060 sys 0m0.089s 00:12:50.060 09:26:50 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.060 ************************************ 00:12:50.060 END TEST env_dpdk_post_init 00:12:50.060 ************************************ 00:12:50.060 09:26:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:12:50.060 09:26:50 env -- env/env.sh@26 -- # uname 00:12:50.060 09:26:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:12:50.061 09:26:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:12:50.061 09:26:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:50.061 09:26:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.061 09:26:50 env -- common/autotest_common.sh@10 -- # set +x 00:12:50.061 ************************************ 00:12:50.061 START TEST env_mem_callbacks 00:12:50.061 ************************************ 00:12:50.061 09:26:50 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:12:50.061 EAL: Detected CPU lcores: 10 00:12:50.061 EAL: Detected NUMA nodes: 1 00:12:50.061 EAL: Detected shared linkage of DPDK 00:12:50.320 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:12:50.320 EAL: Selected IOVA mode 'PA' 00:12:50.320 00:12:50.320 00:12:50.320 CUnit - A unit testing framework for C - Version 2.1-3 00:12:50.320 http://cunit.sourceforge.net/ 00:12:50.320 00:12:50.320 TELEMETRY: No legacy callbacks, legacy socket not created 00:12:50.320 00:12:50.320 Suite: memory 00:12:50.320 Test: test ... 00:12:50.320 register 0x200000200000 2097152 00:12:50.320 malloc 3145728 00:12:50.320 register 0x200000400000 4194304 00:12:50.320 buf 0x2000004fffc0 len 3145728 PASSED 00:12:50.320 malloc 64 00:12:50.320 buf 0x2000004ffec0 len 64 PASSED 00:12:50.320 malloc 4194304 00:12:50.320 register 0x200000800000 6291456 00:12:50.320 buf 0x2000009fffc0 len 4194304 PASSED 00:12:50.320 free 0x2000004fffc0 3145728 00:12:50.320 free 0x2000004ffec0 64 00:12:50.320 unregister 0x200000400000 4194304 PASSED 00:12:50.320 free 0x2000009fffc0 4194304 00:12:50.320 unregister 0x200000800000 6291456 PASSED 00:12:50.320 malloc 8388608 00:12:50.320 register 0x200000400000 10485760 00:12:50.320 buf 0x2000005fffc0 len 8388608 PASSED 00:12:50.320 free 0x2000005fffc0 8388608 00:12:50.320 unregister 0x200000400000 10485760 PASSED 00:12:50.320 passed 00:12:50.320 00:12:50.320 Run Summary: Type Total Ran Passed Failed Inactive 00:12:50.320 suites 1 1 n/a 0 0 00:12:50.320 tests 1 1 1 0 0 00:12:50.320 asserts 15 15 15 0 n/a 00:12:50.320 00:12:50.320 Elapsed time = 0.077 seconds 00:12:50.320 00:12:50.320 real 0m0.269s 00:12:50.320 user 0m0.104s 00:12:50.320 sys 0m0.062s 00:12:50.320 09:26:50 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.320 09:26:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:12:50.320 ************************************ 00:12:50.320 END TEST env_mem_callbacks 00:12:50.320 ************************************ 00:12:50.320 00:12:50.320 real 0m10.767s 00:12:50.320 user 0m9.106s 00:12:50.320 sys 0m1.299s 00:12:50.320 09:26:50 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.579 09:26:50 env -- common/autotest_common.sh@10 -- # set +x 00:12:50.579 ************************************ 00:12:50.579 END TEST env 00:12:50.579 
************************************ 00:12:50.579 09:26:50 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:12:50.579 09:26:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:50.579 09:26:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.579 09:26:50 -- common/autotest_common.sh@10 -- # set +x 00:12:50.579 ************************************ 00:12:50.579 START TEST rpc 00:12:50.579 ************************************ 00:12:50.579 09:26:50 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:12:50.579 * Looking for test storage... 00:12:50.579 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:12:50.579 09:26:51 rpc -- rpc/rpc.sh@65 -- # spdk_pid=62295 00:12:50.579 09:26:51 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:50.579 09:26:51 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:12:50.579 09:26:51 rpc -- rpc/rpc.sh@67 -- # waitforlisten 62295 00:12:50.579 09:26:51 rpc -- common/autotest_common.sh@831 -- # '[' -z 62295 ']' 00:12:50.579 09:26:51 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.579 09:26:51 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.579 09:26:51 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.579 09:26:51 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.579 09:26:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.838 [2024-07-25 09:26:51.208222] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:50.838 [2024-07-25 09:26:51.208481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62295 ] 00:12:50.838 [2024-07-25 09:26:51.373384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.097 [2024-07-25 09:26:51.624434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:12:51.097 [2024-07-25 09:26:51.624544] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 62295' to capture a snapshot of events at runtime. 00:12:51.097 [2024-07-25 09:26:51.624613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.097 [2024-07-25 09:26:51.624649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.097 [2024-07-25 09:26:51.624714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid62295 for offline analysis/debug. 
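In the rpc tests that follow, rpc_cmd is effectively a wrapper that sends JSON-RPC requests to this spdk_tgt process over the UNIX socket noted above; the same calls can be issued directly with scripts/rpc.py, for example:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs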
00:12:51.097 [2024-07-25 09:26:51.624786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.035 09:26:52 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.035 09:26:52 rpc -- common/autotest_common.sh@864 -- # return 0 00:12:52.035 09:26:52 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:12:52.035 09:26:52 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:12:52.035 09:26:52 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:12:52.035 09:26:52 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:12:52.035 09:26:52 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:52.035 09:26:52 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.035 09:26:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 ************************************ 00:12:52.035 START TEST rpc_integrity 00:12:52.035 ************************************ 00:12:52.035 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:12:52.035 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:52.035 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.035 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:52.035 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.035 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:12:52.035 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:12:52.295 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:12:52.295 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.295 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:12:52.295 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.295 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:12:52.295 { 00:12:52.295 "name": "Malloc0", 00:12:52.295 "aliases": [ 00:12:52.295 "ded12bc9-189e-4cb1-a234-c83d1a522baf" 00:12:52.295 ], 00:12:52.295 "product_name": "Malloc disk", 00:12:52.295 "block_size": 512, 00:12:52.295 "num_blocks": 16384, 00:12:52.295 "uuid": "ded12bc9-189e-4cb1-a234-c83d1a522baf", 00:12:52.295 "assigned_rate_limits": { 00:12:52.295 "rw_ios_per_sec": 0, 00:12:52.295 "rw_mbytes_per_sec": 0, 00:12:52.295 "r_mbytes_per_sec": 0, 00:12:52.295 "w_mbytes_per_sec": 0 00:12:52.295 }, 00:12:52.295 "claimed": false, 00:12:52.295 "zoned": false, 00:12:52.295 "supported_io_types": { 00:12:52.295 "read": true, 00:12:52.295 "write": true, 00:12:52.295 "unmap": true, 00:12:52.295 "flush": true, 
00:12:52.295 "reset": true, 00:12:52.295 "nvme_admin": false, 00:12:52.295 "nvme_io": false, 00:12:52.295 "nvme_io_md": false, 00:12:52.295 "write_zeroes": true, 00:12:52.295 "zcopy": true, 00:12:52.295 "get_zone_info": false, 00:12:52.295 "zone_management": false, 00:12:52.295 "zone_append": false, 00:12:52.295 "compare": false, 00:12:52.295 "compare_and_write": false, 00:12:52.295 "abort": true, 00:12:52.295 "seek_hole": false, 00:12:52.295 "seek_data": false, 00:12:52.295 "copy": true, 00:12:52.295 "nvme_iov_md": false 00:12:52.295 }, 00:12:52.295 "memory_domains": [ 00:12:52.295 { 00:12:52.295 "dma_device_id": "system", 00:12:52.295 "dma_device_type": 1 00:12:52.295 }, 00:12:52.295 { 00:12:52.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.295 "dma_device_type": 2 00:12:52.295 } 00:12:52.295 ], 00:12:52.295 "driver_specific": {} 00:12:52.295 } 00:12:52.295 ]' 00:12:52.295 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:12:52.295 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:12:52.295 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:52.295 [2024-07-25 09:26:52.773214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:12:52.295 [2024-07-25 09:26:52.773304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.295 [2024-07-25 09:26:52.773338] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:52.295 [2024-07-25 09:26:52.773349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.295 [2024-07-25 09:26:52.775729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.295 [2024-07-25 09:26:52.775821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:12:52.295 Passthru0 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.295 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:52.295 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.295 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:12:52.295 { 00:12:52.295 "name": "Malloc0", 00:12:52.295 "aliases": [ 00:12:52.295 "ded12bc9-189e-4cb1-a234-c83d1a522baf" 00:12:52.295 ], 00:12:52.295 "product_name": "Malloc disk", 00:12:52.295 "block_size": 512, 00:12:52.295 "num_blocks": 16384, 00:12:52.295 "uuid": "ded12bc9-189e-4cb1-a234-c83d1a522baf", 00:12:52.295 "assigned_rate_limits": { 00:12:52.295 "rw_ios_per_sec": 0, 00:12:52.295 "rw_mbytes_per_sec": 0, 00:12:52.295 "r_mbytes_per_sec": 0, 00:12:52.295 "w_mbytes_per_sec": 0 00:12:52.295 }, 00:12:52.295 "claimed": true, 00:12:52.295 "claim_type": "exclusive_write", 00:12:52.295 "zoned": false, 00:12:52.295 "supported_io_types": { 00:12:52.295 "read": true, 00:12:52.295 "write": true, 00:12:52.295 "unmap": true, 00:12:52.295 "flush": true, 00:12:52.295 "reset": true, 00:12:52.295 "nvme_admin": false, 00:12:52.295 "nvme_io": false, 00:12:52.295 "nvme_io_md": false, 00:12:52.295 "write_zeroes": true, 00:12:52.295 "zcopy": true, 
00:12:52.295 "get_zone_info": false, 00:12:52.295 "zone_management": false, 00:12:52.295 "zone_append": false, 00:12:52.295 "compare": false, 00:12:52.295 "compare_and_write": false, 00:12:52.295 "abort": true, 00:12:52.295 "seek_hole": false, 00:12:52.295 "seek_data": false, 00:12:52.295 "copy": true, 00:12:52.295 "nvme_iov_md": false 00:12:52.295 }, 00:12:52.295 "memory_domains": [ 00:12:52.295 { 00:12:52.295 "dma_device_id": "system", 00:12:52.295 "dma_device_type": 1 00:12:52.295 }, 00:12:52.295 { 00:12:52.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.295 "dma_device_type": 2 00:12:52.295 } 00:12:52.295 ], 00:12:52.296 "driver_specific": {} 00:12:52.296 }, 00:12:52.296 { 00:12:52.296 "name": "Passthru0", 00:12:52.296 "aliases": [ 00:12:52.296 "bd50b40a-76c0-574d-912b-264d3b9995eb" 00:12:52.296 ], 00:12:52.296 "product_name": "passthru", 00:12:52.296 "block_size": 512, 00:12:52.296 "num_blocks": 16384, 00:12:52.296 "uuid": "bd50b40a-76c0-574d-912b-264d3b9995eb", 00:12:52.296 "assigned_rate_limits": { 00:12:52.296 "rw_ios_per_sec": 0, 00:12:52.296 "rw_mbytes_per_sec": 0, 00:12:52.296 "r_mbytes_per_sec": 0, 00:12:52.296 "w_mbytes_per_sec": 0 00:12:52.296 }, 00:12:52.296 "claimed": false, 00:12:52.296 "zoned": false, 00:12:52.296 "supported_io_types": { 00:12:52.296 "read": true, 00:12:52.296 "write": true, 00:12:52.296 "unmap": true, 00:12:52.296 "flush": true, 00:12:52.296 "reset": true, 00:12:52.296 "nvme_admin": false, 00:12:52.296 "nvme_io": false, 00:12:52.296 "nvme_io_md": false, 00:12:52.296 "write_zeroes": true, 00:12:52.296 "zcopy": true, 00:12:52.296 "get_zone_info": false, 00:12:52.296 "zone_management": false, 00:12:52.296 "zone_append": false, 00:12:52.296 "compare": false, 00:12:52.296 "compare_and_write": false, 00:12:52.296 "abort": true, 00:12:52.296 "seek_hole": false, 00:12:52.296 "seek_data": false, 00:12:52.296 "copy": true, 00:12:52.296 "nvme_iov_md": false 00:12:52.296 }, 00:12:52.296 "memory_domains": [ 00:12:52.296 { 00:12:52.296 "dma_device_id": "system", 00:12:52.296 "dma_device_type": 1 00:12:52.296 }, 00:12:52.296 { 00:12:52.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.296 "dma_device_type": 2 00:12:52.296 } 00:12:52.296 ], 00:12:52.296 "driver_specific": { 00:12:52.296 "passthru": { 00:12:52.296 "name": "Passthru0", 00:12:52.296 "base_bdev_name": "Malloc0" 00:12:52.296 } 00:12:52.296 } 00:12:52.296 } 00:12:52.296 ]' 00:12:52.296 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:12:52.296 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:12:52.296 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:12:52.296 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.296 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:52.296 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.296 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:12:52.296 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.296 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:52.556 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.556 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:52.556 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.556 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:12:52.556 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.556 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:12:52.556 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:12:52.556 ************************************ 00:12:52.556 END TEST rpc_integrity 00:12:52.556 ************************************ 00:12:52.556 09:26:52 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:12:52.556 00:12:52.556 real 0m0.374s 00:12:52.556 user 0m0.207s 00:12:52.556 sys 0m0.050s 00:12:52.556 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.556 09:26:52 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:52.556 09:26:53 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:12:52.556 09:26:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:52.556 09:26:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.556 09:26:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.556 ************************************ 00:12:52.556 START TEST rpc_plugins 00:12:52.556 ************************************ 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:12:52.556 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.556 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:12:52.556 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.556 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:12:52.556 { 00:12:52.556 "name": "Malloc1", 00:12:52.556 "aliases": [ 00:12:52.556 "d59c8ff8-aa57-499f-9ec4-55c21e4adad1" 00:12:52.556 ], 00:12:52.556 "product_name": "Malloc disk", 00:12:52.556 "block_size": 4096, 00:12:52.556 "num_blocks": 256, 00:12:52.556 "uuid": "d59c8ff8-aa57-499f-9ec4-55c21e4adad1", 00:12:52.556 "assigned_rate_limits": { 00:12:52.556 "rw_ios_per_sec": 0, 00:12:52.556 "rw_mbytes_per_sec": 0, 00:12:52.556 "r_mbytes_per_sec": 0, 00:12:52.556 "w_mbytes_per_sec": 0 00:12:52.556 }, 00:12:52.556 "claimed": false, 00:12:52.556 "zoned": false, 00:12:52.556 "supported_io_types": { 00:12:52.556 "read": true, 00:12:52.556 "write": true, 00:12:52.556 "unmap": true, 00:12:52.556 "flush": true, 00:12:52.556 "reset": true, 00:12:52.556 "nvme_admin": false, 00:12:52.556 "nvme_io": false, 00:12:52.556 "nvme_io_md": false, 00:12:52.556 "write_zeroes": true, 00:12:52.556 "zcopy": true, 00:12:52.556 "get_zone_info": false, 00:12:52.556 "zone_management": false, 00:12:52.556 "zone_append": false, 00:12:52.556 "compare": false, 00:12:52.556 "compare_and_write": false, 00:12:52.556 "abort": true, 00:12:52.556 "seek_hole": false, 00:12:52.556 "seek_data": false, 00:12:52.556 "copy": true, 00:12:52.556 "nvme_iov_md": false 00:12:52.556 }, 00:12:52.556 "memory_domains": [ 00:12:52.556 { 00:12:52.556 "dma_device_id": "system", 00:12:52.556 "dma_device_type": 1 00:12:52.556 }, 00:12:52.556 { 00:12:52.556 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:52.556 "dma_device_type": 2 00:12:52.556 } 00:12:52.556 ], 00:12:52.556 "driver_specific": {} 00:12:52.556 } 00:12:52.556 ]' 00:12:52.556 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:12:52.556 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:12:52.556 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.556 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:52.556 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.556 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:12:52.556 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:12:52.816 09:26:53 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:12:52.816 00:12:52.816 real 0m0.156s 00:12:52.816 user 0m0.079s 00:12:52.816 sys 0m0.030s 00:12:52.816 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.816 09:26:53 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:12:52.816 ************************************ 00:12:52.816 END TEST rpc_plugins 00:12:52.816 ************************************ 00:12:52.816 09:26:53 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:12:52.816 09:26:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:52.816 09:26:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.816 09:26:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.816 ************************************ 00:12:52.816 START TEST rpc_trace_cmd_test 00:12:52.816 ************************************ 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:12:52.816 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid62295", 00:12:52.816 "tpoint_group_mask": "0x8", 00:12:52.816 "iscsi_conn": { 00:12:52.816 "mask": "0x2", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "scsi": { 00:12:52.816 "mask": "0x4", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "bdev": { 00:12:52.816 "mask": "0x8", 00:12:52.816 "tpoint_mask": "0xffffffffffffffff" 00:12:52.816 }, 00:12:52.816 "nvmf_rdma": { 00:12:52.816 "mask": "0x10", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "nvmf_tcp": { 00:12:52.816 "mask": "0x20", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "ftl": { 00:12:52.816 "mask": "0x40", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "blobfs": { 00:12:52.816 "mask": "0x80", 00:12:52.816 
"tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "dsa": { 00:12:52.816 "mask": "0x200", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "thread": { 00:12:52.816 "mask": "0x400", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "nvme_pcie": { 00:12:52.816 "mask": "0x800", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "iaa": { 00:12:52.816 "mask": "0x1000", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "nvme_tcp": { 00:12:52.816 "mask": "0x2000", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "bdev_nvme": { 00:12:52.816 "mask": "0x4000", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 }, 00:12:52.816 "sock": { 00:12:52.816 "mask": "0x8000", 00:12:52.816 "tpoint_mask": "0x0" 00:12:52.816 } 00:12:52.816 }' 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:12:52.816 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:12:53.075 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:12:53.075 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:12:53.075 ************************************ 00:12:53.075 END TEST rpc_trace_cmd_test 00:12:53.075 ************************************ 00:12:53.075 09:26:53 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:12:53.075 00:12:53.075 real 0m0.243s 00:12:53.075 user 0m0.199s 00:12:53.075 sys 0m0.034s 00:12:53.075 09:26:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.075 09:26:53 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.075 09:26:53 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:12:53.075 09:26:53 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:12:53.075 09:26:53 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:12:53.075 09:26:53 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:53.075 09:26:53 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.075 09:26:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:53.075 ************************************ 00:12:53.075 START TEST rpc_daemon_integrity 00:12:53.075 ************************************ 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:12:53.075 09:26:53 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.075 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:12:53.075 { 00:12:53.075 "name": "Malloc2", 00:12:53.075 "aliases": [ 00:12:53.075 "5fd2cead-6771-4561-b168-53497e7f9cc1" 00:12:53.075 ], 00:12:53.075 "product_name": "Malloc disk", 00:12:53.075 "block_size": 512, 00:12:53.075 "num_blocks": 16384, 00:12:53.076 "uuid": "5fd2cead-6771-4561-b168-53497e7f9cc1", 00:12:53.076 "assigned_rate_limits": { 00:12:53.076 "rw_ios_per_sec": 0, 00:12:53.076 "rw_mbytes_per_sec": 0, 00:12:53.076 "r_mbytes_per_sec": 0, 00:12:53.076 "w_mbytes_per_sec": 0 00:12:53.076 }, 00:12:53.076 "claimed": false, 00:12:53.076 "zoned": false, 00:12:53.076 "supported_io_types": { 00:12:53.076 "read": true, 00:12:53.076 "write": true, 00:12:53.076 "unmap": true, 00:12:53.076 "flush": true, 00:12:53.076 "reset": true, 00:12:53.076 "nvme_admin": false, 00:12:53.076 "nvme_io": false, 00:12:53.076 "nvme_io_md": false, 00:12:53.076 "write_zeroes": true, 00:12:53.076 "zcopy": true, 00:12:53.076 "get_zone_info": false, 00:12:53.076 "zone_management": false, 00:12:53.076 "zone_append": false, 00:12:53.076 "compare": false, 00:12:53.076 "compare_and_write": false, 00:12:53.076 "abort": true, 00:12:53.076 "seek_hole": false, 00:12:53.076 "seek_data": false, 00:12:53.076 "copy": true, 00:12:53.076 "nvme_iov_md": false 00:12:53.076 }, 00:12:53.076 "memory_domains": [ 00:12:53.076 { 00:12:53.076 "dma_device_id": "system", 00:12:53.076 "dma_device_type": 1 00:12:53.076 }, 00:12:53.076 { 00:12:53.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.076 "dma_device_type": 2 00:12:53.076 } 00:12:53.076 ], 00:12:53.076 "driver_specific": {} 00:12:53.076 } 00:12:53.076 ]' 00:12:53.076 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:12:53.335 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:12:53.335 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:12:53.335 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.335 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:53.335 [2024-07-25 09:26:53.699203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:12:53.335 [2024-07-25 09:26:53.699287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.335 [2024-07-25 09:26:53.699317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:53.335 [2024-07-25 09:26:53.699327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.335 [2024-07-25 09:26:53.701774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.336 [2024-07-25 09:26:53.701813] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:12:53.336 Passthru0 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:12:53.336 { 00:12:53.336 "name": "Malloc2", 00:12:53.336 "aliases": [ 00:12:53.336 "5fd2cead-6771-4561-b168-53497e7f9cc1" 00:12:53.336 ], 00:12:53.336 "product_name": "Malloc disk", 00:12:53.336 "block_size": 512, 00:12:53.336 "num_blocks": 16384, 00:12:53.336 "uuid": "5fd2cead-6771-4561-b168-53497e7f9cc1", 00:12:53.336 "assigned_rate_limits": { 00:12:53.336 "rw_ios_per_sec": 0, 00:12:53.336 "rw_mbytes_per_sec": 0, 00:12:53.336 "r_mbytes_per_sec": 0, 00:12:53.336 "w_mbytes_per_sec": 0 00:12:53.336 }, 00:12:53.336 "claimed": true, 00:12:53.336 "claim_type": "exclusive_write", 00:12:53.336 "zoned": false, 00:12:53.336 "supported_io_types": { 00:12:53.336 "read": true, 00:12:53.336 "write": true, 00:12:53.336 "unmap": true, 00:12:53.336 "flush": true, 00:12:53.336 "reset": true, 00:12:53.336 "nvme_admin": false, 00:12:53.336 "nvme_io": false, 00:12:53.336 "nvme_io_md": false, 00:12:53.336 "write_zeroes": true, 00:12:53.336 "zcopy": true, 00:12:53.336 "get_zone_info": false, 00:12:53.336 "zone_management": false, 00:12:53.336 "zone_append": false, 00:12:53.336 "compare": false, 00:12:53.336 "compare_and_write": false, 00:12:53.336 "abort": true, 00:12:53.336 "seek_hole": false, 00:12:53.336 "seek_data": false, 00:12:53.336 "copy": true, 00:12:53.336 "nvme_iov_md": false 00:12:53.336 }, 00:12:53.336 "memory_domains": [ 00:12:53.336 { 00:12:53.336 "dma_device_id": "system", 00:12:53.336 "dma_device_type": 1 00:12:53.336 }, 00:12:53.336 { 00:12:53.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.336 "dma_device_type": 2 00:12:53.336 } 00:12:53.336 ], 00:12:53.336 "driver_specific": {} 00:12:53.336 }, 00:12:53.336 { 00:12:53.336 "name": "Passthru0", 00:12:53.336 "aliases": [ 00:12:53.336 "54e5c323-f304-5d3a-b91a-bfe166b96216" 00:12:53.336 ], 00:12:53.336 "product_name": "passthru", 00:12:53.336 "block_size": 512, 00:12:53.336 "num_blocks": 16384, 00:12:53.336 "uuid": "54e5c323-f304-5d3a-b91a-bfe166b96216", 00:12:53.336 "assigned_rate_limits": { 00:12:53.336 "rw_ios_per_sec": 0, 00:12:53.336 "rw_mbytes_per_sec": 0, 00:12:53.336 "r_mbytes_per_sec": 0, 00:12:53.336 "w_mbytes_per_sec": 0 00:12:53.336 }, 00:12:53.336 "claimed": false, 00:12:53.336 "zoned": false, 00:12:53.336 "supported_io_types": { 00:12:53.336 "read": true, 00:12:53.336 "write": true, 00:12:53.336 "unmap": true, 00:12:53.336 "flush": true, 00:12:53.336 "reset": true, 00:12:53.336 "nvme_admin": false, 00:12:53.336 "nvme_io": false, 00:12:53.336 "nvme_io_md": false, 00:12:53.336 "write_zeroes": true, 00:12:53.336 "zcopy": true, 00:12:53.336 "get_zone_info": false, 00:12:53.336 "zone_management": false, 00:12:53.336 "zone_append": false, 00:12:53.336 "compare": false, 00:12:53.336 "compare_and_write": false, 00:12:53.336 "abort": true, 00:12:53.336 "seek_hole": false, 00:12:53.336 "seek_data": false, 00:12:53.336 "copy": true, 00:12:53.336 "nvme_iov_md": false 00:12:53.336 }, 00:12:53.336 
"memory_domains": [ 00:12:53.336 { 00:12:53.336 "dma_device_id": "system", 00:12:53.336 "dma_device_type": 1 00:12:53.336 }, 00:12:53.336 { 00:12:53.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.336 "dma_device_type": 2 00:12:53.336 } 00:12:53.336 ], 00:12:53.336 "driver_specific": { 00:12:53.336 "passthru": { 00:12:53.336 "name": "Passthru0", 00:12:53.336 "base_bdev_name": "Malloc2" 00:12:53.336 } 00:12:53.336 } 00:12:53.336 } 00:12:53.336 ]' 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:12:53.336 ************************************ 00:12:53.336 END TEST rpc_daemon_integrity 00:12:53.336 ************************************ 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:12:53.336 00:12:53.336 real 0m0.317s 00:12:53.336 user 0m0.169s 00:12:53.336 sys 0m0.040s 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.336 09:26:53 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:12:53.336 09:26:53 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:12:53.336 09:26:53 rpc -- rpc/rpc.sh@84 -- # killprocess 62295 00:12:53.336 09:26:53 rpc -- common/autotest_common.sh@950 -- # '[' -z 62295 ']' 00:12:53.336 09:26:53 rpc -- common/autotest_common.sh@954 -- # kill -0 62295 00:12:53.336 09:26:53 rpc -- common/autotest_common.sh@955 -- # uname 00:12:53.336 09:26:53 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.336 09:26:53 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62295 00:12:53.336 killing process with pid 62295 00:12:53.336 09:26:53 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.336 09:26:53 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.336 09:26:53 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62295' 00:12:53.336 09:26:53 rpc -- common/autotest_common.sh@969 -- # kill 62295 00:12:53.336 09:26:53 rpc -- common/autotest_common.sh@974 -- # wait 62295 00:12:56.625 00:12:56.625 real 0m5.524s 00:12:56.625 user 0m6.055s 
00:12:56.625 sys 0m0.826s 00:12:56.625 ************************************ 00:12:56.625 END TEST rpc 00:12:56.625 ************************************ 00:12:56.625 09:26:56 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:56.625 09:26:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.625 09:26:56 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:12:56.625 09:26:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:56.625 09:26:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.625 09:26:56 -- common/autotest_common.sh@10 -- # set +x 00:12:56.625 ************************************ 00:12:56.625 START TEST skip_rpc 00:12:56.625 ************************************ 00:12:56.625 09:26:56 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:12:56.625 * Looking for test storage... 00:12:56.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:12:56.625 09:26:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:12:56.625 09:26:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:56.625 09:26:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:12:56.625 09:26:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:56.625 09:26:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:56.625 09:26:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.625 ************************************ 00:12:56.625 START TEST skip_rpc 00:12:56.625 ************************************ 00:12:56.625 09:26:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:12:56.625 09:26:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=62516 00:12:56.625 09:26:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:12:56.625 09:26:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:56.625 09:26:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:12:56.625 [2024-07-25 09:26:56.809674] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:12:56.625 [2024-07-25 09:26:56.809915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62516 ] 00:12:56.625 [2024-07-25 09:26:56.973937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.625 [2024-07-25 09:26:57.230081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 62516 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 62516 ']' 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 62516 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62516 00:13:01.900 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:01.901 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:01.901 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62516' 00:13:01.901 killing process with pid 62516 00:13:01.901 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 62516 00:13:01.901 09:27:01 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 62516 00:13:03.808 00:13:03.808 real 0m7.615s 00:13:03.808 user 0m7.141s 00:13:03.808 sys 0m0.388s 00:13:03.808 ************************************ 00:13:03.808 END TEST skip_rpc 00:13:03.808 ************************************ 00:13:03.808 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.808 09:27:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:13:03.808 09:27:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:13:03.808 09:27:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:03.808 09:27:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.808 09:27:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.808 ************************************ 00:13:03.808 START TEST skip_rpc_with_json 00:13:03.808 ************************************ 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62626 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 62626 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 62626 ']' 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:03.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:03.808 09:27:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:04.067 [2024-07-25 09:27:04.485390] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:04.067 [2024-07-25 09:27:04.485518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62626 ] 00:13:04.067 [2024-07-25 09:27:04.646752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.327 [2024-07-25 09:27:04.892218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:05.264 [2024-07-25 09:27:05.842399] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:13:05.264 request: 00:13:05.264 { 00:13:05.264 "trtype": "tcp", 00:13:05.264 "method": "nvmf_get_transports", 00:13:05.264 "req_id": 1 00:13:05.264 } 00:13:05.264 Got JSON-RPC error response 00:13:05.264 response: 00:13:05.264 { 00:13:05.264 "code": -19, 00:13:05.264 "message": "No such device" 00:13:05.264 } 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:05.264 [2024-07-25 09:27:05.854468] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.264 09:27:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:05.525 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.525 09:27:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:05.525 { 00:13:05.525 "subsystems": [ 00:13:05.525 { 00:13:05.525 "subsystem": "keyring", 00:13:05.525 "config": [] 00:13:05.525 }, 00:13:05.525 { 00:13:05.525 "subsystem": "iobuf", 00:13:05.525 "config": [ 00:13:05.525 { 00:13:05.525 "method": "iobuf_set_options", 00:13:05.525 "params": { 00:13:05.525 "small_pool_count": 8192, 00:13:05.525 "large_pool_count": 1024, 00:13:05.525 "small_bufsize": 8192, 00:13:05.525 "large_bufsize": 135168 00:13:05.525 } 00:13:05.525 } 00:13:05.525 ] 00:13:05.525 }, 00:13:05.525 { 00:13:05.525 "subsystem": "sock", 00:13:05.525 "config": [ 00:13:05.525 { 00:13:05.525 "method": "sock_set_default_impl", 00:13:05.525 "params": { 00:13:05.525 "impl_name": "posix" 00:13:05.525 } 00:13:05.525 }, 00:13:05.525 { 00:13:05.525 "method": "sock_impl_set_options", 00:13:05.525 "params": { 00:13:05.525 "impl_name": "ssl", 00:13:05.525 "recv_buf_size": 4096, 00:13:05.525 "send_buf_size": 4096, 
00:13:05.525 "enable_recv_pipe": true, 00:13:05.525 "enable_quickack": false, 00:13:05.525 "enable_placement_id": 0, 00:13:05.525 "enable_zerocopy_send_server": true, 00:13:05.525 "enable_zerocopy_send_client": false, 00:13:05.525 "zerocopy_threshold": 0, 00:13:05.525 "tls_version": 0, 00:13:05.525 "enable_ktls": false 00:13:05.525 } 00:13:05.525 }, 00:13:05.525 { 00:13:05.525 "method": "sock_impl_set_options", 00:13:05.525 "params": { 00:13:05.525 "impl_name": "posix", 00:13:05.525 "recv_buf_size": 2097152, 00:13:05.525 "send_buf_size": 2097152, 00:13:05.525 "enable_recv_pipe": true, 00:13:05.525 "enable_quickack": false, 00:13:05.525 "enable_placement_id": 0, 00:13:05.525 "enable_zerocopy_send_server": true, 00:13:05.525 "enable_zerocopy_send_client": false, 00:13:05.525 "zerocopy_threshold": 0, 00:13:05.525 "tls_version": 0, 00:13:05.525 "enable_ktls": false 00:13:05.525 } 00:13:05.525 } 00:13:05.525 ] 00:13:05.525 }, 00:13:05.525 { 00:13:05.525 "subsystem": "vmd", 00:13:05.525 "config": [] 00:13:05.525 }, 00:13:05.525 { 00:13:05.525 "subsystem": "accel", 00:13:05.525 "config": [ 00:13:05.525 { 00:13:05.525 "method": "accel_set_options", 00:13:05.525 "params": { 00:13:05.525 "small_cache_size": 128, 00:13:05.525 "large_cache_size": 16, 00:13:05.525 "task_count": 2048, 00:13:05.525 "sequence_count": 2048, 00:13:05.525 "buf_count": 2048 00:13:05.525 } 00:13:05.525 } 00:13:05.525 ] 00:13:05.525 }, 00:13:05.525 { 00:13:05.525 "subsystem": "bdev", 00:13:05.525 "config": [ 00:13:05.525 { 00:13:05.525 "method": "bdev_set_options", 00:13:05.525 "params": { 00:13:05.525 "bdev_io_pool_size": 65535, 00:13:05.525 "bdev_io_cache_size": 256, 00:13:05.525 "bdev_auto_examine": true, 00:13:05.525 "iobuf_small_cache_size": 128, 00:13:05.525 "iobuf_large_cache_size": 16 00:13:05.525 } 00:13:05.525 }, 00:13:05.525 { 00:13:05.525 "method": "bdev_raid_set_options", 00:13:05.525 "params": { 00:13:05.525 "process_window_size_kb": 1024, 00:13:05.525 "process_max_bandwidth_mb_sec": 0 00:13:05.525 } 00:13:05.525 }, 00:13:05.525 { 00:13:05.525 "method": "bdev_iscsi_set_options", 00:13:05.526 "params": { 00:13:05.526 "timeout_sec": 30 00:13:05.526 } 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "method": "bdev_nvme_set_options", 00:13:05.526 "params": { 00:13:05.526 "action_on_timeout": "none", 00:13:05.526 "timeout_us": 0, 00:13:05.526 "timeout_admin_us": 0, 00:13:05.526 "keep_alive_timeout_ms": 10000, 00:13:05.526 "arbitration_burst": 0, 00:13:05.526 "low_priority_weight": 0, 00:13:05.526 "medium_priority_weight": 0, 00:13:05.526 "high_priority_weight": 0, 00:13:05.526 "nvme_adminq_poll_period_us": 10000, 00:13:05.526 "nvme_ioq_poll_period_us": 0, 00:13:05.526 "io_queue_requests": 0, 00:13:05.526 "delay_cmd_submit": true, 00:13:05.526 "transport_retry_count": 4, 00:13:05.526 "bdev_retry_count": 3, 00:13:05.526 "transport_ack_timeout": 0, 00:13:05.526 "ctrlr_loss_timeout_sec": 0, 00:13:05.526 "reconnect_delay_sec": 0, 00:13:05.526 "fast_io_fail_timeout_sec": 0, 00:13:05.526 "disable_auto_failback": false, 00:13:05.526 "generate_uuids": false, 00:13:05.526 "transport_tos": 0, 00:13:05.526 "nvme_error_stat": false, 00:13:05.526 "rdma_srq_size": 0, 00:13:05.526 "io_path_stat": false, 00:13:05.526 "allow_accel_sequence": false, 00:13:05.526 "rdma_max_cq_size": 0, 00:13:05.526 "rdma_cm_event_timeout_ms": 0, 00:13:05.526 "dhchap_digests": [ 00:13:05.526 "sha256", 00:13:05.526 "sha384", 00:13:05.526 "sha512" 00:13:05.526 ], 00:13:05.526 "dhchap_dhgroups": [ 00:13:05.526 "null", 00:13:05.526 "ffdhe2048", 00:13:05.526 
"ffdhe3072", 00:13:05.526 "ffdhe4096", 00:13:05.526 "ffdhe6144", 00:13:05.526 "ffdhe8192" 00:13:05.526 ] 00:13:05.526 } 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "method": "bdev_nvme_set_hotplug", 00:13:05.526 "params": { 00:13:05.526 "period_us": 100000, 00:13:05.526 "enable": false 00:13:05.526 } 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "method": "bdev_wait_for_examine" 00:13:05.526 } 00:13:05.526 ] 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "subsystem": "scsi", 00:13:05.526 "config": null 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "subsystem": "scheduler", 00:13:05.526 "config": [ 00:13:05.526 { 00:13:05.526 "method": "framework_set_scheduler", 00:13:05.526 "params": { 00:13:05.526 "name": "static" 00:13:05.526 } 00:13:05.526 } 00:13:05.526 ] 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "subsystem": "vhost_scsi", 00:13:05.526 "config": [] 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "subsystem": "vhost_blk", 00:13:05.526 "config": [] 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "subsystem": "ublk", 00:13:05.526 "config": [] 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "subsystem": "nbd", 00:13:05.526 "config": [] 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "subsystem": "nvmf", 00:13:05.526 "config": [ 00:13:05.526 { 00:13:05.526 "method": "nvmf_set_config", 00:13:05.526 "params": { 00:13:05.526 "discovery_filter": "match_any", 00:13:05.526 "admin_cmd_passthru": { 00:13:05.526 "identify_ctrlr": false 00:13:05.526 } 00:13:05.526 } 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "method": "nvmf_set_max_subsystems", 00:13:05.526 "params": { 00:13:05.526 "max_subsystems": 1024 00:13:05.526 } 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "method": "nvmf_set_crdt", 00:13:05.526 "params": { 00:13:05.526 "crdt1": 0, 00:13:05.526 "crdt2": 0, 00:13:05.526 "crdt3": 0 00:13:05.526 } 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "method": "nvmf_create_transport", 00:13:05.526 "params": { 00:13:05.526 "trtype": "TCP", 00:13:05.526 "max_queue_depth": 128, 00:13:05.526 "max_io_qpairs_per_ctrlr": 127, 00:13:05.526 "in_capsule_data_size": 4096, 00:13:05.526 "max_io_size": 131072, 00:13:05.526 "io_unit_size": 131072, 00:13:05.526 "max_aq_depth": 128, 00:13:05.526 "num_shared_buffers": 511, 00:13:05.526 "buf_cache_size": 4294967295, 00:13:05.526 "dif_insert_or_strip": false, 00:13:05.526 "zcopy": false, 00:13:05.526 "c2h_success": true, 00:13:05.526 "sock_priority": 0, 00:13:05.526 "abort_timeout_sec": 1, 00:13:05.526 "ack_timeout": 0, 00:13:05.526 "data_wr_pool_size": 0 00:13:05.526 } 00:13:05.526 } 00:13:05.526 ] 00:13:05.526 }, 00:13:05.526 { 00:13:05.526 "subsystem": "iscsi", 00:13:05.526 "config": [ 00:13:05.526 { 00:13:05.526 "method": "iscsi_set_options", 00:13:05.526 "params": { 00:13:05.526 "node_base": "iqn.2016-06.io.spdk", 00:13:05.526 "max_sessions": 128, 00:13:05.526 "max_connections_per_session": 2, 00:13:05.526 "max_queue_depth": 64, 00:13:05.526 "default_time2wait": 2, 00:13:05.526 "default_time2retain": 20, 00:13:05.526 "first_burst_length": 8192, 00:13:05.526 "immediate_data": true, 00:13:05.526 "allow_duplicated_isid": false, 00:13:05.526 "error_recovery_level": 0, 00:13:05.526 "nop_timeout": 60, 00:13:05.526 "nop_in_interval": 30, 00:13:05.526 "disable_chap": false, 00:13:05.526 "require_chap": false, 00:13:05.526 "mutual_chap": false, 00:13:05.526 "chap_group": 0, 00:13:05.526 "max_large_datain_per_connection": 64, 00:13:05.526 "max_r2t_per_connection": 4, 00:13:05.526 "pdu_pool_size": 36864, 00:13:05.526 "immediate_data_pool_size": 16384, 00:13:05.526 "data_out_pool_size": 2048 
00:13:05.526 } 00:13:05.526 } 00:13:05.526 ] 00:13:05.526 } 00:13:05.526 ] 00:13:05.526 } 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 62626 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62626 ']' 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62626 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62626 00:13:05.526 killing process with pid 62626 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62626' 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62626 00:13:05.526 09:27:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62626 00:13:08.827 09:27:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62687 00:13:08.827 09:27:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:08.827 09:27:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 62687 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 62687 ']' 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 62687 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62687 00:13:14.111 killing process with pid 62687 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62687' 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 62687 00:13:14.111 09:27:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 62687 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:13:16.038 ************************************ 00:13:16.038 END TEST skip_rpc_with_json 00:13:16.038 ************************************ 00:13:16.038 00:13:16.038 real 0m12.010s 00:13:16.038 user 0m11.450s 00:13:16.038 sys 0m0.777s 00:13:16.038 09:27:16 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:13:16.038 09:27:16 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:13:16.038 09:27:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:16.038 09:27:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.038 09:27:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.038 ************************************ 00:13:16.038 START TEST skip_rpc_with_delay 00:13:16.038 ************************************ 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:13:16.038 [2024-07-25 09:27:16.567750] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:13:16.038 [2024-07-25 09:27:16.567911] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:13:16.038 ************************************ 00:13:16.038 END TEST skip_rpc_with_delay 00:13:16.038 ************************************ 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:16.038 00:13:16.038 real 0m0.181s 00:13:16.038 user 0m0.107s 00:13:16.038 sys 0m0.072s 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.038 09:27:16 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:13:16.298 09:27:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:13:16.298 09:27:16 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:13:16.298 09:27:16 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:13:16.298 09:27:16 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:16.298 09:27:16 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:16.298 09:27:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.298 ************************************ 00:13:16.298 START TEST exit_on_failed_rpc_init 00:13:16.298 ************************************ 00:13:16.298 09:27:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:13:16.298 09:27:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62821 00:13:16.298 09:27:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:13:16.298 09:27:16 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 62821 00:13:16.298 09:27:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 62821 ']' 00:13:16.298 09:27:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.298 09:27:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:16.298 09:27:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:16.298 09:27:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:16.298 09:27:16 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:16.298 [2024-07-25 09:27:16.802913] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:16.298 [2024-07-25 09:27:16.803036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62821 ] 00:13:16.558 [2024-07-25 09:27:16.966983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.817 [2024-07-25 09:27:17.204758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:13:17.755 09:27:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:13:17.755 [2024-07-25 09:27:18.307210] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:17.755 [2024-07-25 09:27:18.307451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62844 ] 00:13:18.058 [2024-07-25 09:27:18.472238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.320 [2024-07-25 09:27:18.723516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.320 [2024-07-25 09:27:18.723701] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:13:18.320 [2024-07-25 09:27:18.723766] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:13:18.320 [2024-07-25 09:27:18.723822] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 62821 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 62821 ']' 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 62821 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:13:18.889 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.890 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62821 00:13:18.890 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:18.890 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:18.890 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62821' 00:13:18.890 killing process with pid 62821 00:13:18.890 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 62821 00:13:18.890 09:27:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 62821 00:13:21.427 00:13:21.427 real 0m5.111s 00:13:21.427 user 0m5.763s 00:13:21.427 sys 0m0.563s 00:13:21.427 09:27:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.427 ************************************ 00:13:21.427 END TEST exit_on_failed_rpc_init 00:13:21.427 09:27:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:13:21.427 ************************************ 00:13:21.427 09:27:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:13:21.427 00:13:21.427 real 0m25.291s 00:13:21.427 user 0m24.594s 00:13:21.427 sys 0m2.058s 00:13:21.427 ************************************ 00:13:21.427 END TEST skip_rpc 00:13:21.427 ************************************ 00:13:21.427 09:27:21 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.427 09:27:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.427 09:27:21 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:21.427 09:27:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:21.427 09:27:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.427 09:27:21 -- common/autotest_common.sh@10 -- # set +x 00:13:21.427 
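The save_config/--json round trip that skip_rpc_with_json verified above (the 'TCP Transport Init' grep) can be sketched with the same binaries; the socket, file paths, and backgrounding here are illustrative assumptions, not taken from this run:

  # dump the live configuration of an already-configured target to JSON
  scripts/rpc.py save_config > config.json
  # restart the target non-interactively from that file and capture its log
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  sleep 5
  # the saved nvmf_create_transport call is replayed at startup
  grep -q 'TCP Transport Init' log.txt && echo 'TCP transport restored from config.json'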
************************************ 00:13:21.427 START TEST rpc_client 00:13:21.427 ************************************ 00:13:21.427 09:27:21 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:13:21.427 * Looking for test storage... 00:13:21.686 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:13:21.686 09:27:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:13:21.687 OK 00:13:21.687 09:27:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:13:21.687 00:13:21.687 real 0m0.192s 00:13:21.687 user 0m0.081s 00:13:21.687 sys 0m0.121s 00:13:21.687 09:27:22 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.687 09:27:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:13:21.687 ************************************ 00:13:21.687 END TEST rpc_client 00:13:21.687 ************************************ 00:13:21.687 09:27:22 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:21.687 09:27:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:21.687 09:27:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.687 09:27:22 -- common/autotest_common.sh@10 -- # set +x 00:13:21.687 ************************************ 00:13:21.687 START TEST json_config 00:13:21.687 ************************************ 00:13:21.687 09:27:22 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:13:21.687 09:27:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3d617ecc-e11c-4945-98e2-f53b121c839e 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=3d617ecc-e11c-4945-98e2-f53b121c839e 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:21.687 09:27:22 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.687 09:27:22 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.687 09:27:22 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.687 09:27:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.687 09:27:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.687 09:27:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.687 09:27:22 json_config -- paths/export.sh@5 -- # export PATH 00:13:21.687 09:27:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@47 -- # : 0 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.687 09:27:22 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.947 09:27:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:21.947 09:27:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:13:21.947 09:27:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:13:21.947 09:27:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:13:21.947 09:27:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:13:21.947 09:27:22 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:13:21.947 
WARNING: No tests are enabled so not running JSON configuration tests 00:13:21.947 09:27:22 json_config -- json_config/json_config.sh@28 -- # exit 0 00:13:21.947 00:13:21.947 real 0m0.118s 00:13:21.947 user 0m0.059s 00:13:21.947 sys 0m0.058s 00:13:21.947 ************************************ 00:13:21.947 END TEST json_config 00:13:21.947 ************************************ 00:13:21.947 09:27:22 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.947 09:27:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:13:21.947 09:27:22 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:13:21.947 09:27:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:21.947 09:27:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.947 09:27:22 -- common/autotest_common.sh@10 -- # set +x 00:13:21.947 ************************************ 00:13:21.947 START TEST json_config_extra_key 00:13:21.947 ************************************ 00:13:21.947 09:27:22 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:13:21.947 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3d617ecc-e11c-4945-98e2-f53b121c839e 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3d617ecc-e11c-4945-98e2-f53b121c839e 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:21.947 09:27:22 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.947 09:27:22 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.947 09:27:22 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.947 
09:27:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.947 09:27:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.947 09:27:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.947 09:27:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:13:21.947 09:27:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.947 09:27:22 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.947 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:13:21.947 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:13:21.947 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:13:21.948 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:13:21.948 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:13:21.948 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:13:21.948 09:27:22 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:13:21.948 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:13:21.948 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:13:21.948 INFO: launching applications... 00:13:21.948 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:13:21.948 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:13:21.948 09:27:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:13:21.948 Waiting for target to run... 00:13:21.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=63030 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 63030 /var/tmp/spdk_tgt.sock 00:13:21.948 09:27:22 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 63030 ']' 00:13:21.948 09:27:22 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:13:21.948 09:27:22 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:13:21.948 09:27:22 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.948 09:27:22 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:13:21.948 09:27:22 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.948 09:27:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:13:22.215 [2024-07-25 09:27:22.592586] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:22.215 [2024-07-25 09:27:22.592717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63030 ] 00:13:22.474 [2024-07-25 09:27:22.981386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.732 [2024-07-25 09:27:23.204593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.665 00:13:23.665 INFO: shutting down applications... 00:13:23.665 09:27:24 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:23.665 09:27:24 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:13:23.665 09:27:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:13:23.665 09:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:13:23.665 09:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:13:23.665 09:27:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:13:23.665 09:27:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:13:23.665 09:27:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 63030 ]] 00:13:23.665 09:27:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 63030 00:13:23.665 09:27:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:13:23.665 09:27:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:23.665 09:27:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63030 00:13:23.665 09:27:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:23.924 09:27:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:23.924 09:27:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:23.924 09:27:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63030 00:13:23.924 09:27:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:24.492 09:27:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:24.492 09:27:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:24.492 09:27:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63030 00:13:24.492 09:27:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:25.061 09:27:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:25.061 09:27:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:25.061 09:27:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63030 00:13:25.061 09:27:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:25.660 09:27:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:25.660 09:27:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:25.660 09:27:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63030 00:13:25.661 09:27:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:26.235 09:27:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:26.235 09:27:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:26.235 09:27:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63030 
00:13:26.235 09:27:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:26.495 09:27:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:26.495 09:27:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:26.495 09:27:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63030 00:13:26.495 09:27:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:13:27.064 09:27:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:13:27.064 09:27:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:13:27.064 09:27:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 63030 00:13:27.064 SPDK target shutdown done 00:13:27.064 Success 00:13:27.064 09:27:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:13:27.064 09:27:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:13:27.064 09:27:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:13:27.064 09:27:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:13:27.064 09:27:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:13:27.064 00:13:27.064 real 0m5.200s 00:13:27.064 user 0m4.548s 00:13:27.064 sys 0m0.536s 00:13:27.064 09:27:27 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.064 09:27:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:13:27.064 ************************************ 00:13:27.065 END TEST json_config_extra_key 00:13:27.065 ************************************ 00:13:27.065 09:27:27 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:27.065 09:27:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:27.065 09:27:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.065 09:27:27 -- common/autotest_common.sh@10 -- # set +x 00:13:27.065 ************************************ 00:13:27.065 START TEST alias_rpc 00:13:27.065 ************************************ 00:13:27.065 09:27:27 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:13:27.324 * Looking for test storage... 00:13:27.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:13:27.324 09:27:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:27.324 09:27:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=63140 00:13:27.324 09:27:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:27.324 09:27:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 63140 00:13:27.324 09:27:27 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 63140 ']' 00:13:27.324 09:27:27 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.324 09:27:27 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.324 09:27:27 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:27.324 09:27:27 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.324 09:27:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.324 [2024-07-25 09:27:27.841052] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:27.324 [2024-07-25 09:27:27.841307] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63140 ] 00:13:27.584 [2024-07-25 09:27:28.004633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.844 [2024-07-25 09:27:28.256074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.782 09:27:29 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.782 09:27:29 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:28.782 09:27:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:13:29.043 09:27:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 63140 00:13:29.043 09:27:29 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 63140 ']' 00:13:29.043 09:27:29 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 63140 00:13:29.043 09:27:29 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:13:29.043 09:27:29 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:29.043 09:27:29 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63140 00:13:29.043 killing process with pid 63140 00:13:29.043 09:27:29 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:29.043 09:27:29 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:29.043 09:27:29 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63140' 00:13:29.043 09:27:29 alias_rpc -- common/autotest_common.sh@969 -- # kill 63140 00:13:29.043 09:27:29 alias_rpc -- common/autotest_common.sh@974 -- # wait 63140 00:13:31.583 ************************************ 00:13:31.583 END TEST alias_rpc 00:13:31.583 ************************************ 00:13:31.583 00:13:31.583 real 0m4.490s 00:13:31.583 user 0m4.462s 00:13:31.583 sys 0m0.518s 00:13:31.583 09:27:32 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:31.583 09:27:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.583 09:27:32 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:13:31.583 09:27:32 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:13:31.583 09:27:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:31.583 09:27:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.583 09:27:32 -- common/autotest_common.sh@10 -- # set +x 00:13:31.583 ************************************ 00:13:31.583 START TEST spdkcli_tcp 00:13:31.583 ************************************ 00:13:31.583 09:27:32 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:13:31.842 * Looking for test storage... 
00:13:31.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:13:31.842 09:27:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:13:31.842 09:27:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:13:31.842 09:27:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:13:31.842 09:27:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:13:31.842 09:27:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:13:31.842 09:27:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:31.842 09:27:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:13:31.842 09:27:32 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:31.842 09:27:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:31.842 09:27:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=63245 00:13:31.842 09:27:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:13:31.842 09:27:32 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 63245 00:13:31.842 09:27:32 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 63245 ']' 00:13:31.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.842 09:27:32 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.842 09:27:32 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:31.842 09:27:32 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.842 09:27:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:31.842 09:27:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:31.842 [2024-07-25 09:27:32.415514] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:31.843 [2024-07-25 09:27:32.415657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63245 ] 00:13:32.110 [2024-07-25 09:27:32.572193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:32.369 [2024-07-25 09:27:32.830869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.369 [2024-07-25 09:27:32.830892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.304 09:27:33 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.304 09:27:33 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:13:33.304 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=63262 00:13:33.305 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:13:33.305 09:27:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:13:33.563 [ 00:13:33.563 "bdev_malloc_delete", 00:13:33.563 "bdev_malloc_create", 00:13:33.563 "bdev_null_resize", 00:13:33.563 "bdev_null_delete", 00:13:33.563 "bdev_null_create", 00:13:33.563 "bdev_nvme_cuse_unregister", 00:13:33.563 "bdev_nvme_cuse_register", 00:13:33.563 "bdev_opal_new_user", 00:13:33.563 "bdev_opal_set_lock_state", 00:13:33.563 "bdev_opal_delete", 00:13:33.563 "bdev_opal_get_info", 00:13:33.563 "bdev_opal_create", 00:13:33.563 "bdev_nvme_opal_revert", 00:13:33.563 "bdev_nvme_opal_init", 00:13:33.563 "bdev_nvme_send_cmd", 00:13:33.563 "bdev_nvme_get_path_iostat", 00:13:33.563 "bdev_nvme_get_mdns_discovery_info", 00:13:33.563 "bdev_nvme_stop_mdns_discovery", 00:13:33.563 "bdev_nvme_start_mdns_discovery", 00:13:33.563 "bdev_nvme_set_multipath_policy", 00:13:33.563 "bdev_nvme_set_preferred_path", 00:13:33.563 "bdev_nvme_get_io_paths", 00:13:33.563 "bdev_nvme_remove_error_injection", 00:13:33.563 "bdev_nvme_add_error_injection", 00:13:33.563 "bdev_nvme_get_discovery_info", 00:13:33.563 "bdev_nvme_stop_discovery", 00:13:33.563 "bdev_nvme_start_discovery", 00:13:33.563 "bdev_nvme_get_controller_health_info", 00:13:33.563 "bdev_nvme_disable_controller", 00:13:33.563 "bdev_nvme_enable_controller", 00:13:33.563 "bdev_nvme_reset_controller", 00:13:33.563 "bdev_nvme_get_transport_statistics", 00:13:33.563 "bdev_nvme_apply_firmware", 00:13:33.563 "bdev_nvme_detach_controller", 00:13:33.563 "bdev_nvme_get_controllers", 00:13:33.563 "bdev_nvme_attach_controller", 00:13:33.563 "bdev_nvme_set_hotplug", 00:13:33.563 "bdev_nvme_set_options", 00:13:33.563 "bdev_passthru_delete", 00:13:33.563 "bdev_passthru_create", 00:13:33.563 "bdev_lvol_set_parent_bdev", 00:13:33.563 "bdev_lvol_set_parent", 00:13:33.563 "bdev_lvol_check_shallow_copy", 00:13:33.563 "bdev_lvol_start_shallow_copy", 00:13:33.563 "bdev_lvol_grow_lvstore", 00:13:33.563 "bdev_lvol_get_lvols", 00:13:33.563 "bdev_lvol_get_lvstores", 00:13:33.563 "bdev_lvol_delete", 00:13:33.563 "bdev_lvol_set_read_only", 00:13:33.563 "bdev_lvol_resize", 00:13:33.563 "bdev_lvol_decouple_parent", 00:13:33.563 "bdev_lvol_inflate", 00:13:33.563 "bdev_lvol_rename", 00:13:33.563 "bdev_lvol_clone_bdev", 00:13:33.563 "bdev_lvol_clone", 00:13:33.563 "bdev_lvol_snapshot", 00:13:33.563 "bdev_lvol_create", 00:13:33.563 "bdev_lvol_delete_lvstore", 00:13:33.563 "bdev_lvol_rename_lvstore", 00:13:33.563 "bdev_lvol_create_lvstore", 
00:13:33.563 "bdev_raid_set_options", 00:13:33.563 "bdev_raid_remove_base_bdev", 00:13:33.563 "bdev_raid_add_base_bdev", 00:13:33.563 "bdev_raid_delete", 00:13:33.563 "bdev_raid_create", 00:13:33.563 "bdev_raid_get_bdevs", 00:13:33.563 "bdev_error_inject_error", 00:13:33.563 "bdev_error_delete", 00:13:33.563 "bdev_error_create", 00:13:33.563 "bdev_split_delete", 00:13:33.563 "bdev_split_create", 00:13:33.563 "bdev_delay_delete", 00:13:33.563 "bdev_delay_create", 00:13:33.563 "bdev_delay_update_latency", 00:13:33.563 "bdev_zone_block_delete", 00:13:33.563 "bdev_zone_block_create", 00:13:33.563 "blobfs_create", 00:13:33.563 "blobfs_detect", 00:13:33.563 "blobfs_set_cache_size", 00:13:33.563 "bdev_xnvme_delete", 00:13:33.563 "bdev_xnvme_create", 00:13:33.563 "bdev_aio_delete", 00:13:33.563 "bdev_aio_rescan", 00:13:33.563 "bdev_aio_create", 00:13:33.563 "bdev_ftl_set_property", 00:13:33.563 "bdev_ftl_get_properties", 00:13:33.563 "bdev_ftl_get_stats", 00:13:33.563 "bdev_ftl_unmap", 00:13:33.563 "bdev_ftl_unload", 00:13:33.563 "bdev_ftl_delete", 00:13:33.563 "bdev_ftl_load", 00:13:33.563 "bdev_ftl_create", 00:13:33.563 "bdev_virtio_attach_controller", 00:13:33.563 "bdev_virtio_scsi_get_devices", 00:13:33.563 "bdev_virtio_detach_controller", 00:13:33.563 "bdev_virtio_blk_set_hotplug", 00:13:33.563 "bdev_iscsi_delete", 00:13:33.563 "bdev_iscsi_create", 00:13:33.563 "bdev_iscsi_set_options", 00:13:33.563 "accel_error_inject_error", 00:13:33.563 "ioat_scan_accel_module", 00:13:33.563 "dsa_scan_accel_module", 00:13:33.563 "iaa_scan_accel_module", 00:13:33.563 "keyring_file_remove_key", 00:13:33.563 "keyring_file_add_key", 00:13:33.564 "keyring_linux_set_options", 00:13:33.564 "iscsi_get_histogram", 00:13:33.564 "iscsi_enable_histogram", 00:13:33.564 "iscsi_set_options", 00:13:33.564 "iscsi_get_auth_groups", 00:13:33.564 "iscsi_auth_group_remove_secret", 00:13:33.564 "iscsi_auth_group_add_secret", 00:13:33.564 "iscsi_delete_auth_group", 00:13:33.564 "iscsi_create_auth_group", 00:13:33.564 "iscsi_set_discovery_auth", 00:13:33.564 "iscsi_get_options", 00:13:33.564 "iscsi_target_node_request_logout", 00:13:33.564 "iscsi_target_node_set_redirect", 00:13:33.564 "iscsi_target_node_set_auth", 00:13:33.564 "iscsi_target_node_add_lun", 00:13:33.564 "iscsi_get_stats", 00:13:33.564 "iscsi_get_connections", 00:13:33.564 "iscsi_portal_group_set_auth", 00:13:33.564 "iscsi_start_portal_group", 00:13:33.564 "iscsi_delete_portal_group", 00:13:33.564 "iscsi_create_portal_group", 00:13:33.564 "iscsi_get_portal_groups", 00:13:33.564 "iscsi_delete_target_node", 00:13:33.564 "iscsi_target_node_remove_pg_ig_maps", 00:13:33.564 "iscsi_target_node_add_pg_ig_maps", 00:13:33.564 "iscsi_create_target_node", 00:13:33.564 "iscsi_get_target_nodes", 00:13:33.564 "iscsi_delete_initiator_group", 00:13:33.564 "iscsi_initiator_group_remove_initiators", 00:13:33.564 "iscsi_initiator_group_add_initiators", 00:13:33.564 "iscsi_create_initiator_group", 00:13:33.564 "iscsi_get_initiator_groups", 00:13:33.564 "nvmf_set_crdt", 00:13:33.564 "nvmf_set_config", 00:13:33.564 "nvmf_set_max_subsystems", 00:13:33.564 "nvmf_stop_mdns_prr", 00:13:33.564 "nvmf_publish_mdns_prr", 00:13:33.564 "nvmf_subsystem_get_listeners", 00:13:33.564 "nvmf_subsystem_get_qpairs", 00:13:33.564 "nvmf_subsystem_get_controllers", 00:13:33.564 "nvmf_get_stats", 00:13:33.564 "nvmf_get_transports", 00:13:33.564 "nvmf_create_transport", 00:13:33.564 "nvmf_get_targets", 00:13:33.564 "nvmf_delete_target", 00:13:33.564 "nvmf_create_target", 00:13:33.564 
"nvmf_subsystem_allow_any_host", 00:13:33.564 "nvmf_subsystem_remove_host", 00:13:33.564 "nvmf_subsystem_add_host", 00:13:33.564 "nvmf_ns_remove_host", 00:13:33.564 "nvmf_ns_add_host", 00:13:33.564 "nvmf_subsystem_remove_ns", 00:13:33.564 "nvmf_subsystem_add_ns", 00:13:33.564 "nvmf_subsystem_listener_set_ana_state", 00:13:33.564 "nvmf_discovery_get_referrals", 00:13:33.564 "nvmf_discovery_remove_referral", 00:13:33.564 "nvmf_discovery_add_referral", 00:13:33.564 "nvmf_subsystem_remove_listener", 00:13:33.564 "nvmf_subsystem_add_listener", 00:13:33.564 "nvmf_delete_subsystem", 00:13:33.564 "nvmf_create_subsystem", 00:13:33.564 "nvmf_get_subsystems", 00:13:33.564 "env_dpdk_get_mem_stats", 00:13:33.564 "nbd_get_disks", 00:13:33.564 "nbd_stop_disk", 00:13:33.564 "nbd_start_disk", 00:13:33.564 "ublk_recover_disk", 00:13:33.564 "ublk_get_disks", 00:13:33.564 "ublk_stop_disk", 00:13:33.564 "ublk_start_disk", 00:13:33.564 "ublk_destroy_target", 00:13:33.564 "ublk_create_target", 00:13:33.564 "virtio_blk_create_transport", 00:13:33.564 "virtio_blk_get_transports", 00:13:33.564 "vhost_controller_set_coalescing", 00:13:33.564 "vhost_get_controllers", 00:13:33.564 "vhost_delete_controller", 00:13:33.564 "vhost_create_blk_controller", 00:13:33.564 "vhost_scsi_controller_remove_target", 00:13:33.564 "vhost_scsi_controller_add_target", 00:13:33.564 "vhost_start_scsi_controller", 00:13:33.564 "vhost_create_scsi_controller", 00:13:33.564 "thread_set_cpumask", 00:13:33.564 "framework_get_governor", 00:13:33.564 "framework_get_scheduler", 00:13:33.564 "framework_set_scheduler", 00:13:33.564 "framework_get_reactors", 00:13:33.564 "thread_get_io_channels", 00:13:33.564 "thread_get_pollers", 00:13:33.564 "thread_get_stats", 00:13:33.564 "framework_monitor_context_switch", 00:13:33.564 "spdk_kill_instance", 00:13:33.564 "log_enable_timestamps", 00:13:33.564 "log_get_flags", 00:13:33.564 "log_clear_flag", 00:13:33.564 "log_set_flag", 00:13:33.564 "log_get_level", 00:13:33.564 "log_set_level", 00:13:33.564 "log_get_print_level", 00:13:33.564 "log_set_print_level", 00:13:33.564 "framework_enable_cpumask_locks", 00:13:33.564 "framework_disable_cpumask_locks", 00:13:33.564 "framework_wait_init", 00:13:33.564 "framework_start_init", 00:13:33.564 "scsi_get_devices", 00:13:33.564 "bdev_get_histogram", 00:13:33.564 "bdev_enable_histogram", 00:13:33.564 "bdev_set_qos_limit", 00:13:33.564 "bdev_set_qd_sampling_period", 00:13:33.564 "bdev_get_bdevs", 00:13:33.564 "bdev_reset_iostat", 00:13:33.564 "bdev_get_iostat", 00:13:33.564 "bdev_examine", 00:13:33.564 "bdev_wait_for_examine", 00:13:33.564 "bdev_set_options", 00:13:33.564 "notify_get_notifications", 00:13:33.564 "notify_get_types", 00:13:33.564 "accel_get_stats", 00:13:33.564 "accel_set_options", 00:13:33.564 "accel_set_driver", 00:13:33.564 "accel_crypto_key_destroy", 00:13:33.564 "accel_crypto_keys_get", 00:13:33.564 "accel_crypto_key_create", 00:13:33.564 "accel_assign_opc", 00:13:33.564 "accel_get_module_info", 00:13:33.564 "accel_get_opc_assignments", 00:13:33.564 "vmd_rescan", 00:13:33.564 "vmd_remove_device", 00:13:33.564 "vmd_enable", 00:13:33.564 "sock_get_default_impl", 00:13:33.564 "sock_set_default_impl", 00:13:33.564 "sock_impl_set_options", 00:13:33.564 "sock_impl_get_options", 00:13:33.564 "iobuf_get_stats", 00:13:33.564 "iobuf_set_options", 00:13:33.564 "framework_get_pci_devices", 00:13:33.564 "framework_get_config", 00:13:33.564 "framework_get_subsystems", 00:13:33.564 "trace_get_info", 00:13:33.564 "trace_get_tpoint_group_mask", 00:13:33.564 
"trace_disable_tpoint_group", 00:13:33.564 "trace_enable_tpoint_group", 00:13:33.564 "trace_clear_tpoint_mask", 00:13:33.564 "trace_set_tpoint_mask", 00:13:33.564 "keyring_get_keys", 00:13:33.564 "spdk_get_version", 00:13:33.564 "rpc_get_methods" 00:13:33.564 ] 00:13:33.564 09:27:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:33.564 09:27:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:33.564 09:27:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 63245 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 63245 ']' 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 63245 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63245 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63245' 00:13:33.564 killing process with pid 63245 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 63245 00:13:33.564 09:27:34 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 63245 00:13:36.854 00:13:36.854 real 0m4.549s 00:13:36.854 user 0m7.959s 00:13:36.854 sys 0m0.574s 00:13:36.854 09:27:36 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.854 09:27:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.854 ************************************ 00:13:36.854 END TEST spdkcli_tcp 00:13:36.854 ************************************ 00:13:36.854 09:27:36 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:36.854 09:27:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:36.854 09:27:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.854 09:27:36 -- common/autotest_common.sh@10 -- # set +x 00:13:36.854 ************************************ 00:13:36.854 START TEST dpdk_mem_utility 00:13:36.854 ************************************ 00:13:36.854 09:27:36 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:13:36.854 * Looking for test storage... 00:13:36.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:13:36.854 09:27:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:13:36.854 09:27:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=63359 00:13:36.854 09:27:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:36.854 09:27:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 63359 00:13:36.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:36.854 09:27:36 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 63359 ']' 00:13:36.854 09:27:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.854 09:27:36 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:36.854 09:27:36 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:36.854 09:27:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:36.854 09:27:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:36.854 [2024-07-25 09:27:37.014028] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:36.854 [2024-07-25 09:27:37.014143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63359 ] 00:13:36.854 [2024-07-25 09:27:37.179793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.854 [2024-07-25 09:27:37.423710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.809 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.809 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:13:37.809 09:27:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:13:37.809 09:27:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:13:37.809 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.809 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:37.809 { 00:13:37.810 "filename": "/tmp/spdk_mem_dump.txt" 00:13:37.810 } 00:13:37.810 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.810 09:27:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:13:38.082 DPDK memory size 820.000000 MiB in 1 heap(s) 00:13:38.082 1 heaps totaling size 820.000000 MiB 00:13:38.082 size: 820.000000 MiB heap id: 0 00:13:38.082 end heaps---------- 00:13:38.082 8 mempools totaling size 598.116089 MiB 00:13:38.082 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:13:38.082 size: 158.602051 MiB name: PDU_data_out_Pool 00:13:38.082 size: 84.521057 MiB name: bdev_io_63359 00:13:38.082 size: 51.011292 MiB name: evtpool_63359 00:13:38.082 size: 50.003479 MiB name: msgpool_63359 00:13:38.082 size: 21.763794 MiB name: PDU_Pool 00:13:38.082 size: 19.513306 MiB name: SCSI_TASK_Pool 00:13:38.082 size: 0.026123 MiB name: Session_Pool 00:13:38.082 end mempools------- 00:13:38.082 6 memzones totaling size 4.142822 MiB 00:13:38.082 size: 1.000366 MiB name: RG_ring_0_63359 00:13:38.082 size: 1.000366 MiB name: RG_ring_1_63359 00:13:38.082 size: 1.000366 MiB name: RG_ring_4_63359 00:13:38.082 size: 1.000366 MiB name: RG_ring_5_63359 00:13:38.082 size: 0.125366 MiB name: RG_ring_2_63359 00:13:38.082 size: 0.015991 MiB name: RG_ring_3_63359 00:13:38.082 end memzones------- 00:13:38.082 09:27:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:13:38.082 heap id: 0 total size: 820.000000 MiB number of busy 
elements: 300 number of free elements: 18 00:13:38.082 list of free elements. size: 18.451538 MiB 00:13:38.082 element at address: 0x200000400000 with size: 1.999451 MiB 00:13:38.082 element at address: 0x200000800000 with size: 1.996887 MiB 00:13:38.082 element at address: 0x200007000000 with size: 1.995972 MiB 00:13:38.082 element at address: 0x20000b200000 with size: 1.995972 MiB 00:13:38.082 element at address: 0x200019100040 with size: 0.999939 MiB 00:13:38.082 element at address: 0x200019500040 with size: 0.999939 MiB 00:13:38.082 element at address: 0x200019600000 with size: 0.999084 MiB 00:13:38.082 element at address: 0x200003e00000 with size: 0.996094 MiB 00:13:38.082 element at address: 0x200032200000 with size: 0.994324 MiB 00:13:38.082 element at address: 0x200018e00000 with size: 0.959656 MiB 00:13:38.082 element at address: 0x200019900040 with size: 0.936401 MiB 00:13:38.082 element at address: 0x200000200000 with size: 0.829956 MiB 00:13:38.082 element at address: 0x20001b000000 with size: 0.564148 MiB 00:13:38.082 element at address: 0x200019200000 with size: 0.487976 MiB 00:13:38.082 element at address: 0x200019a00000 with size: 0.485413 MiB 00:13:38.082 element at address: 0x200013800000 with size: 0.467896 MiB 00:13:38.082 element at address: 0x200028400000 with size: 0.390442 MiB 00:13:38.082 element at address: 0x200003a00000 with size: 0.351990 MiB 00:13:38.082 list of standard malloc elements. size: 199.284058 MiB 00:13:38.082 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:13:38.082 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:13:38.082 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:13:38.082 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:13:38.082 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:13:38.082 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:13:38.082 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:13:38.082 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:13:38.082 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:13:38.082 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:13:38.082 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:13:38.082 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5880 with size: 0.000244 MiB 
00:13:38.082 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:13:38.082 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:13:38.083 element at 
address: 0x200003a5b0c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003aff980 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003affa80 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200003eff000 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200013877c80 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200013877d80 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200013877e80 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200013877f80 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200013878080 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200013878180 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200013878280 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200013878380 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200013878480 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200013878580 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927d3c0 
with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:13:38.083 element at address: 0x200019abc680 with size: 0.000244 MiB 00:13:38.083 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b092bc0 with size: 0.000244 MiB 
00:13:38.084 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:13:38.084 element at address: 0x200028463f40 with size: 0.000244 MiB 00:13:38.084 element at address: 0x200028464040 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846af80 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846b080 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846b180 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846b280 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846b380 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846b480 with size: 0.000244 MiB 00:13:38.084 element at 
address: 0x20002846b580 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846b680 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846b780 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846b880 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846b980 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:13:38.084 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846be80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846c080 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846c180 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846c280 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846c380 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846c480 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846c580 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846c680 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846c780 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846c880 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846c980 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846d080 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846d180 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846d280 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846d380 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846d480 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846d580 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846d680 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846d780 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846d880 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846d980 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846da80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846db80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846de80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846df80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846e080 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846e180 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846e280 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846e380 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846e480 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846e580 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846e680 
with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846e780 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846e880 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846e980 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846f080 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846f180 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846f280 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846f380 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846f480 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846f580 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846f680 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846f780 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846f880 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846f980 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:13:38.085 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:13:38.085 list of memzone associated elements. 
size: 602.264404 MiB 00:13:38.085 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:13:38.085 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:13:38.085 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:13:38.085 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:13:38.085 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:13:38.085 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_63359_0 00:13:38.085 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:13:38.085 associated memzone info: size: 48.002930 MiB name: MP_evtpool_63359_0 00:13:38.085 element at address: 0x200003fff340 with size: 48.003113 MiB 00:13:38.085 associated memzone info: size: 48.002930 MiB name: MP_msgpool_63359_0 00:13:38.085 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:13:38.085 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:13:38.085 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:13:38.085 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:13:38.085 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:13:38.085 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_63359 00:13:38.085 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:13:38.085 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_63359 00:13:38.085 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:13:38.085 associated memzone info: size: 1.007996 MiB name: MP_evtpool_63359 00:13:38.085 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:13:38.085 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:13:38.085 element at address: 0x200019abc780 with size: 1.008179 MiB 00:13:38.085 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:13:38.085 element at address: 0x200018efde00 with size: 1.008179 MiB 00:13:38.085 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:13:38.085 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:13:38.085 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:13:38.085 element at address: 0x200003eff100 with size: 1.000549 MiB 00:13:38.085 associated memzone info: size: 1.000366 MiB name: RG_ring_0_63359 00:13:38.086 element at address: 0x200003affb80 with size: 1.000549 MiB 00:13:38.086 associated memzone info: size: 1.000366 MiB name: RG_ring_1_63359 00:13:38.086 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:13:38.086 associated memzone info: size: 1.000366 MiB name: RG_ring_4_63359 00:13:38.086 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:13:38.086 associated memzone info: size: 1.000366 MiB name: RG_ring_5_63359 00:13:38.086 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:13:38.086 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_63359 00:13:38.086 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:13:38.086 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:13:38.086 element at address: 0x200013878680 with size: 0.500549 MiB 00:13:38.086 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:13:38.086 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:13:38.086 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:13:38.086 element at address: 0x200003adf740 with size: 0.125549 MiB 00:13:38.086 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_63359 00:13:38.086 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:13:38.086 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:13:38.086 element at address: 0x200028464140 with size: 0.023804 MiB 00:13:38.086 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:13:38.086 element at address: 0x200003adb500 with size: 0.016174 MiB 00:13:38.086 associated memzone info: size: 0.015991 MiB name: RG_ring_3_63359 00:13:38.086 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:13:38.086 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:13:38.086 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:13:38.086 associated memzone info: size: 0.000183 MiB name: MP_msgpool_63359 00:13:38.086 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:13:38.086 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_63359 00:13:38.086 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:13:38.086 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:13:38.086 09:27:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:13:38.086 09:27:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 63359 00:13:38.086 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 63359 ']' 00:13:38.086 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 63359 00:13:38.086 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:13:38.086 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:38.086 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63359 00:13:38.086 killing process with pid 63359 00:13:38.086 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:38.086 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:38.086 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63359' 00:13:38.086 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 63359 00:13:38.086 09:27:38 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 63359 00:13:40.625 ************************************ 00:13:40.625 END TEST dpdk_mem_utility 00:13:40.625 ************************************ 00:13:40.625 00:13:40.625 real 0m4.411s 00:13:40.625 user 0m4.356s 00:13:40.625 sys 0m0.544s 00:13:40.625 09:27:41 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.625 09:27:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:13:40.884 09:27:41 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:13:40.884 09:27:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:40.884 09:27:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.884 09:27:41 -- common/autotest_common.sh@10 -- # set +x 00:13:40.884 ************************************ 00:13:40.884 START TEST event 00:13:40.884 ************************************ 00:13:40.884 09:27:41 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:13:40.884 * Looking for test storage... 
00:13:40.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:13:40.884 09:27:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:40.884 09:27:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:13:40.884 09:27:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:40.884 09:27:41 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:13:40.884 09:27:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.884 09:27:41 event -- common/autotest_common.sh@10 -- # set +x 00:13:40.884 ************************************ 00:13:40.884 START TEST event_perf 00:13:40.884 ************************************ 00:13:40.884 09:27:41 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:13:40.884 Running I/O for 1 seconds...[2024-07-25 09:27:41.457400] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:40.884 [2024-07-25 09:27:41.457931] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63472 ] 00:13:41.154 [2024-07-25 09:27:41.630207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.413 [2024-07-25 09:27:41.932322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.413 [2024-07-25 09:27:41.932615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.413 Running I/O for 1 seconds...[2024-07-25 09:27:41.932625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.413 [2024-07-25 09:27:41.932501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.322 00:13:43.322 lcore 0: 85812 00:13:43.322 lcore 1: 85815 00:13:43.322 lcore 2: 85814 00:13:43.322 lcore 3: 85810 00:13:43.322 done. 00:13:43.322 00:13:43.322 real 0m2.108s 00:13:43.322 user 0m4.811s 00:13:43.322 sys 0m0.167s 00:13:43.322 09:27:43 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:43.322 09:27:43 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:13:43.322 ************************************ 00:13:43.322 END TEST event_perf 00:13:43.322 ************************************ 00:13:43.322 09:27:43 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:13:43.322 09:27:43 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:43.322 09:27:43 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.322 09:27:43 event -- common/autotest_common.sh@10 -- # set +x 00:13:43.322 ************************************ 00:13:43.322 START TEST event_reactor 00:13:43.322 ************************************ 00:13:43.322 09:27:43 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:13:43.322 [2024-07-25 09:27:43.609196] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:43.322 [2024-07-25 09:27:43.609987] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63517 ] 00:13:43.322 [2024-07-25 09:27:43.783440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.582 [2024-07-25 09:27:44.098376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.490 test_start 00:13:45.490 oneshot 00:13:45.490 tick 100 00:13:45.490 tick 100 00:13:45.490 tick 250 00:13:45.490 tick 100 00:13:45.490 tick 100 00:13:45.490 tick 250 00:13:45.490 tick 100 00:13:45.490 tick 500 00:13:45.490 tick 100 00:13:45.490 tick 100 00:13:45.490 tick 250 00:13:45.490 tick 100 00:13:45.490 tick 100 00:13:45.490 test_end 00:13:45.490 00:13:45.490 real 0m2.074s 00:13:45.490 user 0m1.825s 00:13:45.490 sys 0m0.137s 00:13:45.490 09:27:45 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.490 09:27:45 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:13:45.490 ************************************ 00:13:45.490 END TEST event_reactor 00:13:45.490 ************************************ 00:13:45.490 09:27:45 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:13:45.490 09:27:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:45.490 09:27:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.490 09:27:45 event -- common/autotest_common.sh@10 -- # set +x 00:13:45.490 ************************************ 00:13:45.490 START TEST event_reactor_perf 00:13:45.490 ************************************ 00:13:45.490 09:27:45 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:13:45.490 [2024-07-25 09:27:45.762743] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:45.490 [2024-07-25 09:27:45.762973] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63559 ] 00:13:45.490 [2024-07-25 09:27:45.922046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.750 [2024-07-25 09:27:46.202141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.179 test_start 00:13:47.179 test_end 00:13:47.179 Performance: 368789 events per second 00:13:47.179 00:13:47.179 real 0m1.992s 00:13:47.179 user 0m1.752s 00:13:47.179 sys 0m0.130s 00:13:47.179 09:27:47 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:47.179 09:27:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:13:47.179 ************************************ 00:13:47.179 END TEST event_reactor_perf 00:13:47.179 ************************************ 00:13:47.179 09:27:47 event -- event/event.sh@49 -- # uname -s 00:13:47.179 09:27:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:13:47.179 09:27:47 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:13:47.179 09:27:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:47.179 09:27:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:47.179 09:27:47 event -- common/autotest_common.sh@10 -- # set +x 00:13:47.179 ************************************ 00:13:47.179 START TEST event_scheduler 00:13:47.179 ************************************ 00:13:47.179 09:27:47 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:13:47.438 * Looking for test storage... 00:13:47.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:13:47.438 09:27:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:13:47.438 09:27:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=63627 00:13:47.438 09:27:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:13:47.438 09:27:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:13:47.438 09:27:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 63627 00:13:47.438 09:27:47 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 63627 ']' 00:13:47.438 09:27:47 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.438 09:27:47 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:47.438 09:27:47 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.438 09:27:47 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:47.438 09:27:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:47.438 [2024-07-25 09:27:47.983360] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:47.438 [2024-07-25 09:27:47.983563] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63627 ] 00:13:47.698 [2024-07-25 09:27:48.148691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.957 [2024-07-25 09:27:48.442607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.957 [2024-07-25 09:27:48.442839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.957 [2024-07-25 09:27:48.442890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.957 [2024-07-25 09:27:48.442944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:48.216 09:27:48 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.216 09:27:48 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:13:48.216 09:27:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:13:48.216 09:27:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.216 09:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:48.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:48.216 POWER: Cannot set governor of lcore 0 to userspace 00:13:48.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:48.216 POWER: Cannot set governor of lcore 0 to performance 00:13:48.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:48.216 POWER: Cannot set governor of lcore 0 to userspace 00:13:48.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:13:48.216 POWER: Cannot set governor of lcore 0 to userspace 00:13:48.216 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:13:48.216 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:13:48.216 POWER: Unable to set Power Management Environment for lcore 0 00:13:48.216 [2024-07-25 09:27:48.792482] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:13:48.216 [2024-07-25 09:27:48.792502] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:13:48.216 [2024-07-25 09:27:48.792516] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:13:48.216 [2024-07-25 09:27:48.792536] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:13:48.216 [2024-07-25 09:27:48.792547] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:13:48.216 [2024-07-25 09:27:48.792555] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:13:48.216 09:27:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.216 09:27:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:13:48.216 09:27:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.216 09:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 [2024-07-25 09:27:49.223385] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:13:48.785 09:27:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:13:48.785 09:27:49 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:48.785 09:27:49 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 ************************************ 00:13:48.785 START TEST scheduler_create_thread 00:13:48.785 ************************************ 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 2 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 3 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 4 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 5 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 6 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 7 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 8 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 9 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 10 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.785 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:49.725 09:27:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.725 09:27:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:13:49.725 09:27:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.725 09:27:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:51.134 09:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.134 09:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:13:51.134 09:27:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:13:51.134 09:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.134 09:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:52.073 09:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.073 00:13:52.073 ************************************ 00:13:52.073 END TEST scheduler_create_thread 00:13:52.073 ************************************ 00:13:52.073 real 0m3.382s 00:13:52.073 user 0m0.019s 00:13:52.073 sys 0m0.010s 00:13:52.073 09:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.073 09:27:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:13:52.073 09:27:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:52.073 09:27:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 63627 00:13:52.073 09:27:52 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 63627 ']' 00:13:52.073 09:27:52 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 63627 00:13:52.073 09:27:52 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:13:52.073 09:27:52 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:52.073 09:27:52 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63627 00:13:52.333 killing process with pid 63627 00:13:52.333 09:27:52 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:13:52.333 09:27:52 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:13:52.333 09:27:52 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63627' 00:13:52.333 09:27:52 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 63627 00:13:52.333 09:27:52 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 63627 00:13:52.593 [2024-07-25 09:27:53.002145] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:13:53.975 00:13:53.975 real 0m6.794s 00:13:53.975 user 0m13.458s 00:13:53.975 sys 0m0.575s 00:13:53.975 09:27:54 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.975 ************************************ 00:13:53.975 END TEST event_scheduler 00:13:53.975 ************************************ 00:13:53.975 09:27:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:13:54.235 09:27:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:13:54.235 09:27:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:13:54.235 09:27:54 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:54.235 09:27:54 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.235 09:27:54 event -- common/autotest_common.sh@10 -- # set +x 00:13:54.235 ************************************ 00:13:54.235 START TEST app_repeat 00:13:54.235 ************************************ 00:13:54.235 09:27:54 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=63747 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:13:54.235 Process app_repeat pid: 63747 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63747' 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:13:54.235 spdk_app_start Round 0 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:13:54.235 09:27:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63747 /var/tmp/spdk-nbd.sock 00:13:54.235 09:27:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63747 ']' 00:13:54.235 09:27:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:54.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:54.235 09:27:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.235 09:27:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:54.235 09:27:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.235 09:27:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:13:54.235 [2024-07-25 09:27:54.700615] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:13:54.236 [2024-07-25 09:27:54.700838] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63747 ] 00:13:54.495 [2024-07-25 09:27:54.865681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:54.754 [2024-07-25 09:27:55.155545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.754 [2024-07-25 09:27:55.155590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.014 09:27:55 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:55.014 09:27:55 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:13:55.014 09:27:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:55.273 Malloc0 00:13:55.531 09:27:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:13:55.790 Malloc1 00:13:55.790 09:27:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:55.790 09:27:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:13:56.049 /dev/nbd0 00:13:56.049 09:27:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:56.049 09:27:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:56.050 09:27:56 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:56.050 1+0 records in 00:13:56.050 1+0 records out 00:13:56.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000869875 s, 4.7 MB/s 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:56.050 09:27:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:13:56.050 09:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.050 09:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:56.050 09:27:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:13:56.309 /dev/nbd1 00:13:56.309 09:27:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:56.309 09:27:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:13:56.309 1+0 records in 00:13:56.309 1+0 records out 00:13:56.309 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411028 s, 10.0 MB/s 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:56.309 09:27:56 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:13:56.309 09:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:56.309 09:27:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:56.309 09:27:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:56.309 09:27:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:56.309 
09:27:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:56.568 09:27:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:56.568 { 00:13:56.568 "nbd_device": "/dev/nbd0", 00:13:56.568 "bdev_name": "Malloc0" 00:13:56.568 }, 00:13:56.568 { 00:13:56.568 "nbd_device": "/dev/nbd1", 00:13:56.568 "bdev_name": "Malloc1" 00:13:56.568 } 00:13:56.568 ]' 00:13:56.568 09:27:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:56.568 { 00:13:56.568 "nbd_device": "/dev/nbd0", 00:13:56.568 "bdev_name": "Malloc0" 00:13:56.568 }, 00:13:56.568 { 00:13:56.568 "nbd_device": "/dev/nbd1", 00:13:56.568 "bdev_name": "Malloc1" 00:13:56.568 } 00:13:56.568 ]' 00:13:56.568 09:27:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:56.568 /dev/nbd1' 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:56.568 /dev/nbd1' 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:13:56.568 256+0 records in 00:13:56.568 256+0 records out 00:13:56.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135104 s, 77.6 MB/s 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:56.568 256+0 records in 00:13:56.568 256+0 records out 00:13:56.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024262 s, 43.2 MB/s 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:56.568 256+0 records in 00:13:56.568 256+0 records out 00:13:56.568 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0326375 s, 32.1 MB/s 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:56.568 09:27:57 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.568 09:27:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:56.827 09:27:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:56.827 09:27:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.827 09:27:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.827 09:27:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.827 09:27:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.827 09:27:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.827 09:27:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:56.827 09:27:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.827 09:27:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.827 09:27:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:57.087 09:27:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.087 09:27:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.087 09:27:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.087 09:27:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.087 09:27:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.087 09:27:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.087 09:27:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:13:57.087 09:27:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.087 09:27:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:57.087 09:27:57 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:57.087 09:27:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:57.346 09:27:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:13:57.346 09:27:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:13:57.914 09:27:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:13:59.846 [2024-07-25 09:28:00.119841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:59.846 [2024-07-25 09:28:00.428471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.846 [2024-07-25 09:28:00.428474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.415 [2024-07-25 09:28:00.737437] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:00.415 [2024-07-25 09:28:00.737570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:00.983 spdk_app_start Round 1 00:14:00.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:00.983 09:28:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:00.983 09:28:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:14:00.983 09:28:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63747 /var/tmp/spdk-nbd.sock 00:14:00.983 09:28:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63747 ']' 00:14:00.983 09:28:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:00.983 09:28:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.983 09:28:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:14:00.983 09:28:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.983 09:28:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:00.983 09:28:01 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.983 09:28:01 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:14:00.983 09:28:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:01.241 Malloc0 00:14:01.241 09:28:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:01.500 Malloc1 00:14:01.760 09:28:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:01.760 /dev/nbd0 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:01.760 1+0 records in 00:14:01.760 1+0 records out 
00:14:01.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324971 s, 12.6 MB/s 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:01.760 09:28:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:01.760 09:28:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:02.020 /dev/nbd1 00:14:02.020 09:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:02.020 09:28:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:02.020 1+0 records in 00:14:02.020 1+0 records out 00:14:02.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472579 s, 8.7 MB/s 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:02.020 09:28:02 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:14:02.020 09:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:02.020 09:28:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:02.020 09:28:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:02.020 09:28:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:02.020 09:28:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:02.279 { 00:14:02.279 "nbd_device": "/dev/nbd0", 00:14:02.279 "bdev_name": "Malloc0" 00:14:02.279 }, 00:14:02.279 { 00:14:02.279 "nbd_device": "/dev/nbd1", 00:14:02.279 "bdev_name": "Malloc1" 00:14:02.279 } 
00:14:02.279 ]' 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:02.279 { 00:14:02.279 "nbd_device": "/dev/nbd0", 00:14:02.279 "bdev_name": "Malloc0" 00:14:02.279 }, 00:14:02.279 { 00:14:02.279 "nbd_device": "/dev/nbd1", 00:14:02.279 "bdev_name": "Malloc1" 00:14:02.279 } 00:14:02.279 ]' 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:02.279 /dev/nbd1' 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:02.279 /dev/nbd1' 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:02.279 256+0 records in 00:14:02.279 256+0 records out 00:14:02.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00466019 s, 225 MB/s 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:02.279 256+0 records in 00:14:02.279 256+0 records out 00:14:02.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274107 s, 38.3 MB/s 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:02.279 256+0 records in 00:14:02.279 256+0 records out 00:14:02.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0346808 s, 30.2 MB/s 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:02.279 09:28:02 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:02.279 09:28:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.280 09:28:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:02.538 09:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:02.538 09:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:02.538 09:28:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:02.538 09:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.538 09:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.538 09:28:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:02.538 09:28:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:02.538 09:28:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.538 09:28:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:02.538 09:28:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:02.797 09:28:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:03.055 09:28:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:03.055 09:28:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:03.623 09:28:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:05.531 [2024-07-25 09:28:05.791789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:05.531 [2024-07-25 09:28:06.100897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.531 [2024-07-25 09:28:06.100920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.790 [2024-07-25 09:28:06.378121] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:05.790 [2024-07-25 09:28:06.378216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:06.729 spdk_app_start Round 2 00:14:06.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:06.729 09:28:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:14:06.729 09:28:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:14:06.729 09:28:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 63747 /var/tmp/spdk-nbd.sock 00:14:06.729 09:28:07 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63747 ']' 00:14:06.729 09:28:07 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:06.729 09:28:07 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.729 09:28:07 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
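
Each app_repeat round above runs the same data-verify pass: fill a scratch file with 1 MiB of random data, write it through both nbd devices with direct I/O, then compare the devices back against the file. Reduced to its essentials (file path and device names taken from the trace; error handling omitted):

    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct  # write it through each nbd device
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"                             # read back and compare
    done
    rm "$tmp_file"
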
00:14:06.729 09:28:07 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.729 09:28:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:06.729 09:28:07 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.729 09:28:07 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:14:06.729 09:28:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:06.989 Malloc0 00:14:06.989 09:28:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:14:07.248 Malloc1 00:14:07.248 09:28:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:07.248 09:28:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:07.248 09:28:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:07.248 09:28:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:07.248 09:28:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.248 09:28:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:07.248 09:28:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:14:07.249 09:28:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:07.249 09:28:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:14:07.249 09:28:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.249 09:28:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:07.249 09:28:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.249 09:28:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:14:07.249 09:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.249 09:28:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.249 09:28:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:14:07.508 /dev/nbd0 00:14:07.508 09:28:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.508 09:28:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:07.508 1+0 records in 00:14:07.508 1+0 records out 
00:14:07.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118681 s, 3.5 MB/s 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:07.508 09:28:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:14:07.508 09:28:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.508 09:28:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.508 09:28:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:14:07.767 /dev/nbd1 00:14:07.767 09:28:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.767 09:28:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:14:07.767 1+0 records in 00:14:07.767 1+0 records out 00:14:07.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311563 s, 13.1 MB/s 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:07.767 09:28:08 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:14:07.767 09:28:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.767 09:28:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:07.767 09:28:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:07.767 09:28:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:07.767 09:28:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:08.028 { 00:14:08.028 "nbd_device": "/dev/nbd0", 00:14:08.028 "bdev_name": "Malloc0" 00:14:08.028 }, 00:14:08.028 { 00:14:08.028 "nbd_device": "/dev/nbd1", 00:14:08.028 "bdev_name": "Malloc1" 00:14:08.028 } 
00:14:08.028 ]' 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:08.028 { 00:14:08.028 "nbd_device": "/dev/nbd0", 00:14:08.028 "bdev_name": "Malloc0" 00:14:08.028 }, 00:14:08.028 { 00:14:08.028 "nbd_device": "/dev/nbd1", 00:14:08.028 "bdev_name": "Malloc1" 00:14:08.028 } 00:14:08.028 ]' 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:08.028 /dev/nbd1' 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:08.028 /dev/nbd1' 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:14:08.028 256+0 records in 00:14:08.028 256+0 records out 00:14:08.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131705 s, 79.6 MB/s 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:08.028 256+0 records in 00:14:08.028 256+0 records out 00:14:08.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0284391 s, 36.9 MB/s 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:08.028 256+0 records in 00:14:08.028 256+0 records out 00:14:08.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0340275 s, 30.8 MB/s 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:08.028 09:28:08 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.028 09:28:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.288 09:28:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:08.548 09:28:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:08.807 09:28:09 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:08.807 09:28:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:14:08.807 09:28:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:14:09.377 09:28:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:14:10.757 [2024-07-25 09:28:11.088434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:10.757 [2024-07-25 09:28:11.321879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.757 [2024-07-25 09:28:11.321884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.014 [2024-07-25 09:28:11.554638] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:14:11.014 [2024-07-25 09:28:11.554708] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:14:12.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:12.428 09:28:12 event.app_repeat -- event/event.sh@38 -- # waitforlisten 63747 /var/tmp/spdk-nbd.sock 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 63747 ']' 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
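
Before any of those dd writes, the waitfornbd helper traced above decides that an nbd device is actually usable: it polls /proc/partitions for the device name, then performs one direct-I/O read of a single block and checks that something non-empty came back. A simplified sketch of that readiness check (the real helper retries up to 20 times against its own scratch path under the repo; the retry details and scratch location are simplified here):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break        # device shows up in the partition table
        done
        for ((i = 1; i <= 20; i++)); do
            # one direct-I/O read proves the device actually serves data
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1
    }
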
00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:14:12.428 09:28:12 event.app_repeat -- event/event.sh@39 -- # killprocess 63747 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 63747 ']' 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 63747 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63747 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63747' 00:14:12.428 killing process with pid 63747 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@969 -- # kill 63747 00:14:12.428 09:28:12 event.app_repeat -- common/autotest_common.sh@974 -- # wait 63747 00:14:13.815 spdk_app_start is called in Round 0. 00:14:13.815 Shutdown signal received, stop current app iteration 00:14:13.815 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:14:13.815 spdk_app_start is called in Round 1. 00:14:13.815 Shutdown signal received, stop current app iteration 00:14:13.815 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:14:13.815 spdk_app_start is called in Round 2. 00:14:13.815 Shutdown signal received, stop current app iteration 00:14:13.815 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:14:13.815 spdk_app_start is called in Round 3. 00:14:13.815 Shutdown signal received, stop current app iteration 00:14:13.815 09:28:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:14:13.815 09:28:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:14:13.815 00:14:13.815 real 0m19.634s 00:14:13.815 user 0m39.982s 00:14:13.815 sys 0m2.731s 00:14:13.815 09:28:14 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:13.815 09:28:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:14:13.815 ************************************ 00:14:13.815 END TEST app_repeat 00:14:13.815 ************************************ 00:14:13.815 09:28:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:14:13.815 09:28:14 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:14:13.815 09:28:14 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:13.815 09:28:14 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:13.815 09:28:14 event -- common/autotest_common.sh@10 -- # set +x 00:14:13.815 ************************************ 00:14:13.815 START TEST cpu_locks 00:14:13.815 ************************************ 00:14:13.815 09:28:14 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:14:13.815 * Looking for test storage... 
00:14:13.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:14:13.815 09:28:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:14:13.815 09:28:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:14:13.815 09:28:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:14:13.815 09:28:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:14:13.815 09:28:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:13.815 09:28:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:13.815 09:28:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:14.078 ************************************ 00:14:14.078 START TEST default_locks 00:14:14.078 ************************************ 00:14:14.078 09:28:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:14:14.078 09:28:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=64193 00:14:14.078 09:28:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:14.078 09:28:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 64193 00:14:14.078 09:28:14 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 64193 ']' 00:14:14.078 09:28:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.078 09:28:14 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.078 09:28:14 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.078 09:28:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.078 09:28:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:14.078 [2024-07-25 09:28:14.529205] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:14.078 [2024-07-25 09:28:14.529433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64193 ] 00:14:14.337 [2024-07-25 09:28:14.693453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.337 [2024-07-25 09:28:14.933459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.275 09:28:15 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.275 09:28:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:14:15.275 09:28:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 64193 00:14:15.275 09:28:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 64193 00:14:15.275 09:28:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 64193 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 64193 ']' 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 64193 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64193 00:14:15.843 killing process with pid 64193 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64193' 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 64193 00:14:15.843 09:28:16 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 64193 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 64193 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64193 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 64193 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 64193 ']' 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.383 09:28:18 
event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:18.383 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64193) - No such process 00:14:18.383 ERROR: process (pid: 64193) is no longer running 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:18.383 00:14:18.383 real 0m4.490s 00:14:18.383 user 0m4.397s 00:14:18.383 sys 0m0.644s 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.383 ************************************ 00:14:18.383 END TEST default_locks 00:14:18.383 ************************************ 00:14:18.383 09:28:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:14:18.383 09:28:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:14:18.383 09:28:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:18.383 09:28:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.383 09:28:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:18.383 ************************************ 00:14:18.383 START TEST default_locks_via_rpc 00:14:18.383 ************************************ 00:14:18.383 09:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:14:18.383 09:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=64268 00:14:18.383 09:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:18.383 09:28:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 64268 00:14:18.383 09:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64268 ']' 00:14:18.383 09:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.383 09:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.383 09:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.383 09:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.383 09:28:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.643 [2024-07-25 09:28:19.088859] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:18.643 [2024-07-25 09:28:19.089089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64268 ] 00:14:18.643 [2024-07-25 09:28:19.253707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.903 [2024-07-25 09:28:19.487008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:14:19.841 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.842 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:19.842 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.842 09:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 64268 00:14:19.842 09:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:19.842 09:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 64268 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 64268 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 64268 ']' 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 64268 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64268 00:14:20.101 killing process with pid 64268 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64268' 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 64268 00:14:20.101 09:28:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 64268 00:14:22.644 00:14:22.644 real 0m4.212s 00:14:22.644 user 0m4.168s 00:14:22.644 sys 0m0.561s 00:14:22.644 09:28:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:22.644 ************************************ 00:14:22.644 END TEST default_locks_via_rpc 00:14:22.644 ************************************ 00:14:22.644 09:28:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.644 09:28:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:14:22.644 09:28:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:22.644 09:28:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:22.644 09:28:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:22.911 ************************************ 00:14:22.911 START TEST non_locking_app_on_locked_coremask 00:14:22.911 ************************************ 00:14:22.911 09:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:14:22.911 09:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=64342 00:14:22.911 09:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:22.911 09:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 64342 /var/tmp/spdk.sock 00:14:22.911 09:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64342 ']' 00:14:22.911 09:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.911 09:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:22.911 09:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.911 09:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:22.911 09:28:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:22.911 [2024-07-25 09:28:23.357618] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:22.911 [2024-07-25 09:28:23.357812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64342 ] 00:14:22.911 [2024-07-25 09:28:23.522303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.177 [2024-07-25 09:28:23.758374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.117 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.117 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:24.118 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:14:24.118 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=64364 00:14:24.118 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 64364 /var/tmp/spdk2.sock 00:14:24.118 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64364 ']' 00:14:24.118 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:24.118 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:24.118 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:24.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:24.118 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:24.118 09:28:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:24.378 [2024-07-25 09:28:24.756106] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:24.378 [2024-07-25 09:28:24.756378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64364 ] 00:14:24.378 [2024-07-25 09:28:24.911379] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
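
A note on the cpu_locks tests running here: whether the target holds its CPU-core lock is observed entirely from outside, by looking for the spdk_cpu_lock file among the process's file locks, and default_locks_via_rpc additionally drops and re-takes the lock at runtime over RPC. A hedged sketch of both checks, using the pid and RPC method names visible in the trace (rpc_cmd in the test scripts is assumed to forward to scripts/rpc.py):

    # locks_exist: is an SPDK CPU-core lock held by pid 64268?
    lslocks -p 64268 | grep -q spdk_cpu_lock && echo "core lock held"

    # default_locks_via_rpc: release and re-acquire the core-mask locks at runtime
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" framework_disable_cpumask_locks
    "$rpc" framework_enable_cpumask_locks
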
00:14:24.378 [2024-07-25 09:28:24.911440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.944 [2024-07-25 09:28:25.373358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.851 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.851 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:26.851 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 64342 00:14:26.851 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:26.851 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64342 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 64342 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64342 ']' 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64342 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64342 00:14:27.110 killing process with pid 64342 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64342' 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64342 00:14:27.110 09:28:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64342 00:14:32.396 09:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 64364 00:14:32.396 09:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64364 ']' 00:14:32.396 09:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64364 00:14:32.396 09:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:14:32.396 09:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:32.396 09:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64364 00:14:32.396 killing process with pid 64364 00:14:32.396 09:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:32.396 09:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:32.396 09:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64364' 00:14:32.396 09:28:32 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64364 00:14:32.396 09:28:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64364 00:14:34.934 00:14:34.934 real 0m11.945s 00:14:34.934 user 0m12.104s 00:14:34.934 sys 0m1.099s 00:14:34.934 09:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.934 09:28:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:34.934 ************************************ 00:14:34.934 END TEST non_locking_app_on_locked_coremask 00:14:34.934 ************************************ 00:14:34.934 09:28:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:14:34.934 09:28:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:34.934 09:28:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:34.934 09:28:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:34.934 ************************************ 00:14:34.934 START TEST locking_app_on_unlocked_coremask 00:14:34.934 ************************************ 00:14:34.934 09:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:14:34.934 09:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64512 00:14:34.934 09:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:14:34.934 09:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 64512 /var/tmp/spdk.sock 00:14:34.934 09:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64512 ']' 00:14:34.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.934 09:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.934 09:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:34.934 09:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.934 09:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:34.934 09:28:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:34.934 [2024-07-25 09:28:35.373099] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:34.934 [2024-07-25 09:28:35.373259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64512 ] 00:14:34.934 [2024-07-25 09:28:35.536679] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:14:34.934 [2024-07-25 09:28:35.536764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.193 [2024-07-25 09:28:35.774735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64533 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 64533 /var/tmp/spdk2.sock 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64533 ']' 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:36.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:36.129 09:28:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:36.387 [2024-07-25 09:28:36.790413] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
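The second spdk_tgt that starts above uses the same -m 0x1 mask as the first, but it can come up because the first instance (pid 64512) was launched with --disable-cpumask-locks and therefore never created a /var/tmp/spdk_cpu_lock_* file for core 0. A minimal sketch of the check the locks_exist helper in event/cpu_locks.sh performs, using the pid from this trace (not part of the recorded run):

    # assumes pid 64533 is the plain -m 0x1 target started above
    lslocks -p 64533 | grep -q spdk_cpu_lock \
        && echo 'core lock file held by 64533' \
        || echo 'no spdk_cpu_lock entry for 64533'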
00:14:36.387 [2024-07-25 09:28:36.790620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64533 ] 00:14:36.387 [2024-07-25 09:28:36.943994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.964 [2024-07-25 09:28:37.409635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.869 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.869 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:38.869 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 64533 00:14:38.869 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64533 00:14:38.869 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 64512 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64512 ']' 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64512 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64512 00:14:39.127 killing process with pid 64512 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64512' 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64512 00:14:39.127 09:28:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64512 00:14:44.387 09:28:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 64533 00:14:44.387 09:28:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64533 ']' 00:14:44.387 09:28:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 64533 00:14:44.387 09:28:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:14:44.388 09:28:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.388 09:28:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64533 00:14:44.388 killing process with pid 64533 00:14:44.388 09:28:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:44.388 09:28:44 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:44.388 09:28:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64533' 00:14:44.388 09:28:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 64533 00:14:44.388 09:28:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 64533 00:14:46.924 ************************************ 00:14:46.924 END TEST locking_app_on_unlocked_coremask 00:14:46.924 ************************************ 00:14:46.924 00:14:46.924 real 0m11.893s 00:14:46.924 user 0m12.082s 00:14:46.924 sys 0m1.091s 00:14:46.924 09:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:46.924 09:28:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:46.924 09:28:47 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:14:46.924 09:28:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:46.924 09:28:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:46.924 09:28:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:46.924 ************************************ 00:14:46.924 START TEST locking_app_on_locked_coremask 00:14:46.924 ************************************ 00:14:46.924 09:28:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:14:46.924 09:28:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64682 00:14:46.924 09:28:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:14:46.924 09:28:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 64682 /var/tmp/spdk.sock 00:14:46.924 09:28:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64682 ']' 00:14:46.924 09:28:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.925 09:28:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.925 09:28:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.925 09:28:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.925 09:28:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:46.925 [2024-07-25 09:28:47.320685] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:46.925 [2024-07-25 09:28:47.320904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64682 ] 00:14:46.925 [2024-07-25 09:28:47.485693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.184 [2024-07-25 09:28:47.713863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64703 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64703 /var/tmp/spdk2.sock 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64703 /var/tmp/spdk2.sock 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64703 /var/tmp/spdk2.sock 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 64703 ']' 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:48.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.121 09:28:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:48.121 [2024-07-25 09:28:48.707874] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
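Unlike the previous test, both instances here run with -m 0x1 and neither passes --disable-cpumask-locks, so the startup beginning above is expected to abort once it tries to claim core 0. A minimal reproduction of that conflict, with paths shortened relative to the repo root (a sketch, not the recorded commands verbatim):

    build/bin/spdk_tgt -m 0x1 &                        # first instance claims core 0
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # second instance exits:
    # claim_cpu_cores: Cannot create lock on core 0 ...
    # Unable to acquire lock on assigned core mask - exiting.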
00:14:48.121 [2024-07-25 09:28:48.708105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64703 ] 00:14:48.380 [2024-07-25 09:28:48.865927] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64682 has claimed it. 00:14:48.380 [2024-07-25 09:28:48.866013] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:48.948 ERROR: process (pid: 64703) is no longer running 00:14:48.948 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64703) - No such process 00:14:48.948 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:48.948 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:14:48.948 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:14:48.948 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:48.948 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:48.948 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:48.948 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 64682 00:14:48.948 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 64682 00:14:48.948 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:14:49.205 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 64682 00:14:49.205 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 64682 ']' 00:14:49.205 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 64682 00:14:49.205 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:14:49.205 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.205 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64682 00:14:49.205 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:49.205 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:49.205 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64682' 00:14:49.206 killing process with pid 64682 00:14:49.206 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 64682 00:14:49.206 09:28:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 64682 00:14:51.736 00:14:51.736 real 0m4.956s 00:14:51.736 user 0m5.072s 00:14:51.736 sys 0m0.699s 00:14:51.736 09:28:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:51.736 09:28:52 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:14:51.736 ************************************ 00:14:51.736 END TEST locking_app_on_locked_coremask 00:14:51.736 ************************************ 00:14:51.736 09:28:52 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:14:51.736 09:28:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:51.736 09:28:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:51.736 09:28:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:51.736 ************************************ 00:14:51.736 START TEST locking_overlapped_coremask 00:14:51.736 ************************************ 00:14:51.736 09:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:14:51.736 09:28:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64773 00:14:51.736 09:28:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:14:51.736 09:28:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 64773 /var/tmp/spdk.sock 00:14:51.736 09:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64773 ']' 00:14:51.736 09:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:51.736 09:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.736 09:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:51.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:51.736 09:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.736 09:28:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:51.736 [2024-07-25 09:28:52.342616] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:14:51.736 [2024-07-25 09:28:52.342821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64773 ] 00:14:51.994 [2024-07-25 09:28:52.506888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:52.253 [2024-07-25 09:28:52.741652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.253 [2024-07-25 09:28:52.741793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.253 [2024-07-25 09:28:52.741826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64791 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64791 /var/tmp/spdk2.sock 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 64791 /var/tmp/spdk2.sock 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 64791 /var/tmp/spdk2.sock 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 64791 ']' 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:53.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.189 09:28:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:53.189 [2024-07-25 09:28:53.769894] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
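The first target in this test runs with -m 0x7 (cores 0-2) and the second, starting above, with -m 0x1c (cores 2-4); the two masks intersect only on core 2, which is the core the claim error below reports. A quick arithmetic check, not part of the test script:

    printf '0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 / core 2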
00:14:53.189 [2024-07-25 09:28:53.770116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64791 ] 00:14:53.448 [2024-07-25 09:28:53.930536] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64773 has claimed it. 00:14:53.448 [2024-07-25 09:28:53.930633] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:14:54.083 ERROR: process (pid: 64791) is no longer running 00:14:54.083 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (64791) - No such process 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 64773 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 64773 ']' 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 64773 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64773 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64773' 00:14:54.083 killing process with pid 64773 00:14:54.083 09:28:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 64773 00:14:54.083 09:28:54 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 64773 00:14:56.618 00:14:56.618 real 0m4.757s 00:14:56.618 user 0m12.389s 00:14:56.618 sys 0m0.531s 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:14:56.618 ************************************ 00:14:56.618 END TEST locking_overlapped_coremask 00:14:56.618 ************************************ 00:14:56.618 09:28:57 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:14:56.618 09:28:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:56.618 09:28:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:56.618 09:28:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:14:56.618 ************************************ 00:14:56.618 START TEST locking_overlapped_coremask_via_rpc 00:14:56.618 ************************************ 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64859 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 64859 /var/tmp/spdk.sock 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64859 ']' 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:56.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:56.618 09:28:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.618 [2024-07-25 09:28:57.170141] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:56.618 [2024-07-25 09:28:57.170277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64859 ] 00:14:56.877 [2024-07-25 09:28:57.335359] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:14:56.877 [2024-07-25 09:28:57.335456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:57.136 [2024-07-25 09:28:57.570481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.136 [2024-07-25 09:28:57.570612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.136 [2024-07-25 09:28:57.570650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64884 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 64884 /var/tmp/spdk2.sock 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64884 ']' 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:14:58.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:58.072 09:28:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.072 [2024-07-25 09:28:58.596158] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:58.072 [2024-07-25 09:28:58.596402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64884 ] 00:14:58.331 [2024-07-25 09:28:58.755406] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:14:58.331 [2024-07-25 09:28:58.755481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:58.899 [2024-07-25 09:28:59.234292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:58.899 [2024-07-25 09:28:59.234365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:58.899 [2024-07-25 09:28:59.234391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.809 [2024-07-25 09:29:01.178485] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64859 has claimed it. 
00:15:00.809 request: 00:15:00.809 { 00:15:00.809 "method": "framework_enable_cpumask_locks", 00:15:00.809 "req_id": 1 00:15:00.809 } 00:15:00.809 Got JSON-RPC error response 00:15:00.809 response: 00:15:00.809 { 00:15:00.809 "code": -32603, 00:15:00.809 "message": "Failed to claim CPU core: 2" 00:15:00.809 } 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 64859 /var/tmp/spdk.sock 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64859 ']' 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 64884 /var/tmp/spdk2.sock 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 64884 ']' 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:00.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
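The JSON-RPC exchange a few entries above is the heart of this test: the first target (pid 64859) has just claimed its cores via framework_enable_cpumask_locks, so the same RPC sent to the second target's socket is rejected with -32603. A sketch of reproducing the failing call by hand, using the socket shown in the trace:

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # while pid 64859 holds /var/tmp/spdk_cpu_lock_002 this returns:
    #   "code": -32603, "message": "Failed to claim CPU core: 2"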
00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.809 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.069 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.069 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:15:01.069 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:15:01.069 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:15:01.069 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:15:01.069 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:15:01.069 00:15:01.069 real 0m4.538s 00:15:01.069 user 0m1.222s 00:15:01.069 sys 0m0.188s 00:15:01.069 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.069 09:29:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.069 ************************************ 00:15:01.069 END TEST locking_overlapped_coremask_via_rpc 00:15:01.069 ************************************ 00:15:01.069 09:29:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:15:01.069 09:29:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64859 ]] 00:15:01.069 09:29:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64859 00:15:01.069 09:29:01 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64859 ']' 00:15:01.069 09:29:01 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64859 00:15:01.069 09:29:01 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:15:01.069 09:29:01 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.069 09:29:01 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64859 00:15:01.329 killing process with pid 64859 00:15:01.329 09:29:01 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:01.329 09:29:01 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:01.329 09:29:01 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64859' 00:15:01.329 09:29:01 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64859 00:15:01.329 09:29:01 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64859 00:15:03.870 09:29:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64884 ]] 00:15:03.870 09:29:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64884 00:15:03.870 09:29:04 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64884 ']' 00:15:03.870 09:29:04 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64884 00:15:03.870 09:29:04 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:15:03.870 09:29:04 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.870 
09:29:04 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64884 00:15:03.870 killing process with pid 64884 00:15:03.870 09:29:04 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:03.870 09:29:04 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:03.870 09:29:04 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64884' 00:15:03.870 09:29:04 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 64884 00:15:03.870 09:29:04 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 64884 00:15:06.435 09:29:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:06.435 Process with pid 64859 is not found 00:15:06.435 Process with pid 64884 is not found 00:15:06.435 09:29:06 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:15:06.435 09:29:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 64859 ]] 00:15:06.435 09:29:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 64859 00:15:06.435 09:29:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64859 ']' 00:15:06.435 09:29:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64859 00:15:06.435 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64859) - No such process 00:15:06.435 09:29:06 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64859 is not found' 00:15:06.435 09:29:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 64884 ]] 00:15:06.435 09:29:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 64884 00:15:06.435 09:29:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 64884 ']' 00:15:06.435 09:29:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 64884 00:15:06.435 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (64884) - No such process 00:15:06.435 09:29:06 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 64884 is not found' 00:15:06.435 09:29:06 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:15:06.435 00:15:06.435 real 0m52.622s 00:15:06.435 user 1m28.110s 00:15:06.435 sys 0m5.937s 00:15:06.435 ************************************ 00:15:06.435 END TEST cpu_locks 00:15:06.435 ************************************ 00:15:06.435 09:29:06 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.435 09:29:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:06.435 00:15:06.435 real 1m25.723s 00:15:06.435 user 2m30.115s 00:15:06.435 sys 0m9.999s 00:15:06.435 09:29:06 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:06.435 09:29:06 event -- common/autotest_common.sh@10 -- # set +x 00:15:06.435 ************************************ 00:15:06.435 END TEST event 00:15:06.435 ************************************ 00:15:06.435 09:29:07 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:06.435 09:29:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:06.435 09:29:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.435 09:29:07 -- common/autotest_common.sh@10 -- # set +x 00:15:06.694 ************************************ 00:15:06.694 START TEST thread 00:15:06.694 ************************************ 00:15:06.694 09:29:07 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:15:06.694 * Looking for test storage... 
00:15:06.694 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:15:06.694 09:29:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:06.694 09:29:07 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:15:06.694 09:29:07 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:06.694 09:29:07 thread -- common/autotest_common.sh@10 -- # set +x 00:15:06.694 ************************************ 00:15:06.694 START TEST thread_poller_perf 00:15:06.694 ************************************ 00:15:06.694 09:29:07 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:15:06.694 [2024-07-25 09:29:07.242512] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:06.694 [2024-07-25 09:29:07.242716] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65071 ] 00:15:06.952 [2024-07-25 09:29:07.407873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.210 [2024-07-25 09:29:07.640276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.210 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:15:08.591 ====================================== 00:15:08.591 busy:2298748698 (cyc) 00:15:08.591 total_run_count: 384000 00:15:08.591 tsc_hz: 2290000000 (cyc) 00:15:08.591 ====================================== 00:15:08.591 poller_cost: 5986 (cyc), 2613 (nsec) 00:15:08.591 00:15:08.591 ************************************ 00:15:08.591 END TEST thread_poller_perf 00:15:08.591 ************************************ 00:15:08.591 real 0m1.890s 00:15:08.591 user 0m1.661s 00:15:08.591 sys 0m0.120s 00:15:08.591 09:29:09 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:08.591 09:29:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:08.591 09:29:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:08.591 09:29:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:15:08.591 09:29:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:08.591 09:29:09 thread -- common/autotest_common.sh@10 -- # set +x 00:15:08.591 ************************************ 00:15:08.591 START TEST thread_poller_perf 00:15:08.591 ************************************ 00:15:08.591 09:29:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:15:08.591 [2024-07-25 09:29:09.189774] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:08.591 [2024-07-25 09:29:09.189883] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65113 ] 00:15:08.851 [2024-07-25 09:29:09.354018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.110 Running 1000 pollers for 1 seconds with 0 microseconds period. 
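The poller_cost line printed for the run above follows directly from the counters: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. A quick check with the logged values (the integer truncation is an assumption made so the result matches the printed figures):

    awk 'BEGIN { busy=2298748698; runs=384000; hz=2290000000;
                 cyc=int(busy/runs);
                 printf "%d cyc, %d nsec\n", cyc, int(cyc*1e9/hz) }'
    # -> 5986 cyc, 2613 nsec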
00:15:09.110 [2024-07-25 09:29:09.587895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.528 ====================================== 00:15:10.528 busy:2293808000 (cyc) 00:15:10.528 total_run_count: 4376000 00:15:10.529 tsc_hz: 2290000000 (cyc) 00:15:10.529 ====================================== 00:15:10.529 poller_cost: 524 (cyc), 228 (nsec) 00:15:10.529 00:15:10.529 real 0m1.954s 00:15:10.529 user 0m1.729s 00:15:10.529 sys 0m0.116s 00:15:10.529 ************************************ 00:15:10.529 END TEST thread_poller_perf 00:15:10.529 ************************************ 00:15:10.529 09:29:11 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:10.529 09:29:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 09:29:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:15:10.788 ************************************ 00:15:10.788 END TEST thread 00:15:10.788 ************************************ 00:15:10.788 00:15:10.788 real 0m4.096s 00:15:10.788 user 0m3.473s 00:15:10.788 sys 0m0.414s 00:15:10.788 09:29:11 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:10.788 09:29:11 thread -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 09:29:11 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:15:10.788 09:29:11 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:15:10.788 09:29:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:10.788 09:29:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:10.788 09:29:11 -- common/autotest_common.sh@10 -- # set +x 00:15:10.788 ************************************ 00:15:10.788 START TEST app_cmdline 00:15:10.788 ************************************ 00:15:10.788 09:29:11 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:15:10.788 * Looking for test storage... 00:15:10.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:15:10.788 09:29:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:15:10.788 09:29:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=65194 00:15:10.788 09:29:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 65194 00:15:10.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.788 09:29:11 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 65194 ']' 00:15:10.788 09:29:11 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.788 09:29:11 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.788 09:29:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:15:10.788 09:29:11 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.788 09:29:11 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.788 09:29:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:15:11.047 [2024-07-25 09:29:11.428798] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
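The spdk_tgt started above for the cmdline test is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are callable over its socket; that is why the env_dpdk_get_mem_stats call attempted further down comes back with -32601 "Method not found". A sketch of the three calls against the default /var/tmp/spdk.sock (an assumption matching the rpc.py invocations in this trace):

    scripts/rpc.py spdk_get_version          # allowed
    scripts/rpc.py rpc_get_methods           # allowed
    scripts/rpc.py env_dpdk_get_mem_stats    # rejected: -32601 Method not found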
00:15:11.047 [2024-07-25 09:29:11.428961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65194 ] 00:15:11.047 [2024-07-25 09:29:11.603175] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.305 [2024-07-25 09:29:11.873291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.692 09:29:12 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.692 09:29:12 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:15:12.692 09:29:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:15:12.692 { 00:15:12.692 "version": "SPDK v24.09-pre git sha1 704257090", 00:15:12.692 "fields": { 00:15:12.692 "major": 24, 00:15:12.692 "minor": 9, 00:15:12.692 "patch": 0, 00:15:12.692 "suffix": "-pre", 00:15:12.692 "commit": "704257090" 00:15:12.692 } 00:15:12.692 } 00:15:12.692 09:29:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:15:12.692 09:29:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:15:12.692 09:29:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:15:12.692 09:29:13 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:15:12.692 09:29:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:15:12.692 09:29:13 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.692 09:29:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:15:12.692 09:29:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:15:12.692 09:29:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:15:12.692 09:29:13 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.692 09:29:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:15:12.692 09:29:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:15:12.692 09:29:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:12.693 09:29:13 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:15:12.950 request: 00:15:12.950 { 00:15:12.950 "method": "env_dpdk_get_mem_stats", 00:15:12.950 "req_id": 1 00:15:12.950 } 00:15:12.950 Got JSON-RPC error response 00:15:12.950 response: 00:15:12.950 { 00:15:12.950 "code": -32601, 00:15:12.950 "message": "Method not found" 00:15:12.950 } 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:12.950 09:29:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 65194 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 65194 ']' 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 65194 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65194 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65194' 00:15:12.950 killing process with pid 65194 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@969 -- # kill 65194 00:15:12.950 09:29:13 app_cmdline -- common/autotest_common.sh@974 -- # wait 65194 00:15:16.274 00:15:16.274 real 0m4.958s 00:15:16.274 user 0m5.228s 00:15:16.274 sys 0m0.581s 00:15:16.274 ************************************ 00:15:16.274 END TEST app_cmdline 00:15:16.274 ************************************ 00:15:16.274 09:29:16 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:16.274 09:29:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:15:16.274 09:29:16 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:15:16.274 09:29:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:16.274 09:29:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:16.274 09:29:16 -- common/autotest_common.sh@10 -- # set +x 00:15:16.274 ************************************ 00:15:16.274 START TEST version 00:15:16.274 ************************************ 00:15:16.274 09:29:16 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:15:16.274 * Looking for test storage... 
00:15:16.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:15:16.274 09:29:16 version -- app/version.sh@17 -- # get_header_version major 00:15:16.274 09:29:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:16.274 09:29:16 version -- app/version.sh@14 -- # cut -f2 00:15:16.274 09:29:16 version -- app/version.sh@14 -- # tr -d '"' 00:15:16.274 09:29:16 version -- app/version.sh@17 -- # major=24 00:15:16.274 09:29:16 version -- app/version.sh@18 -- # get_header_version minor 00:15:16.274 09:29:16 version -- app/version.sh@14 -- # cut -f2 00:15:16.274 09:29:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:16.274 09:29:16 version -- app/version.sh@14 -- # tr -d '"' 00:15:16.274 09:29:16 version -- app/version.sh@18 -- # minor=9 00:15:16.274 09:29:16 version -- app/version.sh@19 -- # get_header_version patch 00:15:16.274 09:29:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:16.274 09:29:16 version -- app/version.sh@14 -- # cut -f2 00:15:16.274 09:29:16 version -- app/version.sh@14 -- # tr -d '"' 00:15:16.274 09:29:16 version -- app/version.sh@19 -- # patch=0 00:15:16.274 09:29:16 version -- app/version.sh@20 -- # get_header_version suffix 00:15:16.274 09:29:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:15:16.274 09:29:16 version -- app/version.sh@14 -- # cut -f2 00:15:16.274 09:29:16 version -- app/version.sh@14 -- # tr -d '"' 00:15:16.274 09:29:16 version -- app/version.sh@20 -- # suffix=-pre 00:15:16.274 09:29:16 version -- app/version.sh@22 -- # version=24.9 00:15:16.275 09:29:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:15:16.275 09:29:16 version -- app/version.sh@28 -- # version=24.9rc0 00:15:16.275 09:29:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:16.275 09:29:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:15:16.275 09:29:16 version -- app/version.sh@30 -- # py_version=24.9rc0 00:15:16.275 09:29:16 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:15:16.275 00:15:16.275 real 0m0.206s 00:15:16.275 user 0m0.101s 00:15:16.275 sys 0m0.142s 00:15:16.275 09:29:16 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:16.275 09:29:16 version -- common/autotest_common.sh@10 -- # set +x 00:15:16.275 ************************************ 00:15:16.275 END TEST version 00:15:16.275 ************************************ 00:15:16.275 09:29:16 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:15:16.275 09:29:16 -- spdk/autotest.sh@202 -- # uname -s 00:15:16.275 09:29:16 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:15:16.275 09:29:16 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:15:16.275 09:29:16 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:15:16.275 09:29:16 -- spdk/autotest.sh@215 -- # '[' 1 -eq 1 ']' 00:15:16.275 09:29:16 -- spdk/autotest.sh@216 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:15:16.275 09:29:16 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:15:16.275 09:29:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:16.275 09:29:16 -- common/autotest_common.sh@10 -- # set +x 00:15:16.275 ************************************ 00:15:16.275 START TEST blockdev_nvme 00:15:16.275 ************************************ 00:15:16.275 09:29:16 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:15:16.275 * Looking for test storage... 00:15:16.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:16.275 09:29:16 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=65372 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:16.275 09:29:16 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 65372 00:15:16.275 09:29:16 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 65372 ']' 00:15:16.275 09:29:16 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.275 09:29:16 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:16.275 09:29:16 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:16.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.275 09:29:16 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:16.275 09:29:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.275 [2024-07-25 09:29:16.762017] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:16.275 [2024-07-25 09:29:16.762249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65372 ] 00:15:16.533 [2024-07-25 09:29:16.922609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.790 [2024-07-25 09:29:17.191161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.720 09:29:18 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.720 09:29:18 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:15:17.720 09:29:18 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:15:17.720 09:29:18 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:15:17.720 09:29:18 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:15:17.720 09:29:18 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:15:17.720 09:29:18 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:17.720 09:29:18 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:15:17.720 09:29:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.720 09:29:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.978 09:29:18 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.978 09:29:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:15:17.978 09:29:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.978 09:29:18 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.978 09:29:18 
blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.978 09:29:18 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:15:17.978 09:29:18 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.978 09:29:18 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:17.978 09:29:18 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.978 09:29:18 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:15:17.978 09:29:18 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:15:17.979 09:29:18 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a95deb8e-14ba-420e-b514-b0677d3cc8f2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a95deb8e-14ba-420e-b514-b0677d3cc8f2",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "51ce5e7b-89fe-4fe6-8c58-552049955e84"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "51ce5e7b-89fe-4fe6-8c58-552049955e84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' 
' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "c1b85bab-f39d-4263-ba0e-9aa666105a55"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c1b85bab-f39d-4263-ba0e-9aa666105a55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "212d55ad-62b5-4b06-8458-f74676aee5b4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "212d55ad-62b5-4b06-8458-f74676aee5b4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' 
},' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b8dbeea7-cc5f-42a4-bff4-4e2547a2547b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b8dbeea7-cc5f-42a4-bff4-4e2547a2547b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9c621f28-7836-4282-9ec6-3ccdd9d2d88e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9c621f28-7836-4282-9ec6-3ccdd9d2d88e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:15:17.979 09:29:18 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:15:17.979 09:29:18 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:15:17.979 09:29:18 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:15:17.979 09:29:18 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 65372 00:15:17.979 09:29:18 blockdev_nvme -- 
common/autotest_common.sh@950 -- # '[' -z 65372 ']' 00:15:17.979 09:29:18 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 65372 00:15:17.979 09:29:18 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:15:17.979 09:29:18 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:17.979 09:29:18 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65372 00:15:17.979 killing process with pid 65372 00:15:17.979 09:29:18 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:17.979 09:29:18 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:17.979 09:29:18 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65372' 00:15:17.979 09:29:18 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 65372 00:15:17.979 09:29:18 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 65372 00:15:20.500 09:29:21 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:20.500 09:29:21 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:15:20.500 09:29:21 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:20.500 09:29:21 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:20.500 09:29:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:20.756 ************************************ 00:15:20.756 START TEST bdev_hello_world 00:15:20.756 ************************************ 00:15:20.756 09:29:21 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:15:20.756 [2024-07-25 09:29:21.216747] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:20.756 [2024-07-25 09:29:21.216929] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65473 ] 00:15:21.013 [2024-07-25 09:29:21.385093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.270 [2024-07-25 09:29:21.660572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.838 [2024-07-25 09:29:22.353037] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:21.838 [2024-07-25 09:29:22.353089] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:15:21.838 [2024-07-25 09:29:22.353108] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:21.838 [2024-07-25 09:29:22.355827] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:21.838 [2024-07-25 09:29:22.356314] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:21.838 [2024-07-25 09:29:22.356347] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:21.838 [2024-07-25 09:29:22.356528] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
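The hello_bdev example that produced the Hello World! round trip above runs as a standalone SPDK application: spdk_tgt has already been killed at this point, and hello_bdev instead takes the generated bdev.json (which carries the same bdev_nvme_attach_controller calls that were loaded into the target earlier) plus the name of the bdev to exercise. A reduced sketch of that invocation, trimmed to a single controller purely for illustration; the /tmp path is arbitrary, the subsystems envelope is the usual layout SPDK applications expect from --json, and the --json/-b flags and attach-controller parameters are the ones visible in the log:

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
# writes a test string to Nvme0n1 and reads it back, as shown above
./build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1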
00:15:21.838 00:15:21.838 [2024-07-25 09:29:22.356550] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:23.216 ************************************ 00:15:23.216 END TEST bdev_hello_world 00:15:23.216 ************************************ 00:15:23.216 00:15:23.216 real 0m2.572s 00:15:23.216 user 0m2.205s 00:15:23.216 sys 0m0.255s 00:15:23.216 09:29:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:23.216 09:29:23 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:23.216 09:29:23 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:15:23.216 09:29:23 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:23.216 09:29:23 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:23.216 09:29:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.216 ************************************ 00:15:23.216 START TEST bdev_bounds 00:15:23.216 ************************************ 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:15:23.216 Process bdevio pid: 65515 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=65515 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 65515' 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 65515 00:15:23.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 65515 ']' 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:23.216 09:29:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:23.216 [2024-07-25 09:29:23.812394] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
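The bdev_bounds stage launched just above drives bdevio in wait mode: -w makes it bring the bdev layer up from the same bdev.json and then idle until a perform_tests request arrives over the RPC socket, which is what the tests.py call in the output that follows does. A minimal sketch of that two-step pattern, assuming the default RPC socket and repository-relative paths (the harness itself polls the socket with waitforlisten; the sleep here only keeps the sketch short):

./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
bdevio_pid=$!
sleep 2   # crude stand-in for waitforlisten on the RPC socket
# trigger the CUnit suites (write/read, comparev, nvme passthru, ...) over RPC
./test/bdev/bdevio/tests.py perform_tests
# bdevio keeps serving RPC after the run summary, so stop it explicitly
kill "$bdevio_pid"
wait "$bdevio_pid"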
00:15:23.216 [2024-07-25 09:29:23.812514] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65515 ] 00:15:23.474 [2024-07-25 09:29:23.966451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:23.734 [2024-07-25 09:29:24.208541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.734 [2024-07-25 09:29:24.208588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.734 [2024-07-25 09:29:24.208614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.674 09:29:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:24.674 09:29:24 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:15:24.674 09:29:24 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:24.674 I/O targets: 00:15:24.674 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:24.674 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:24.674 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:24.674 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:24.674 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:24.674 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:24.674 00:15:24.674 00:15:24.674 CUnit - A unit testing framework for C - Version 2.1-3 00:15:24.674 http://cunit.sourceforge.net/ 00:15:24.674 00:15:24.674 00:15:24.674 Suite: bdevio tests on: Nvme3n1 00:15:24.674 Test: blockdev write read block ...passed 00:15:24.674 Test: blockdev write zeroes read block ...passed 00:15:24.674 Test: blockdev write zeroes read no split ...passed 00:15:24.674 Test: blockdev write zeroes read split ...passed 00:15:24.674 Test: blockdev write zeroes read split partial ...passed 00:15:24.674 Test: blockdev reset ...[2024-07-25 09:29:25.193618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:15:24.674 [2024-07-25 09:29:25.197046] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:24.674 passed 00:15:24.674 Test: blockdev write read 8 blocks ...passed 00:15:24.674 Test: blockdev write read size > 128k ...passed 00:15:24.674 Test: blockdev write read invalid size ...passed 00:15:24.674 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:24.674 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:24.674 Test: blockdev write read max offset ...passed 00:15:24.674 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:24.674 Test: blockdev writev readv 8 blocks ...passed 00:15:24.674 Test: blockdev writev readv 30 x 1block ...passed 00:15:24.674 Test: blockdev writev readv block ...passed 00:15:24.674 Test: blockdev writev readv size > 128k ...passed 00:15:24.674 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:24.674 Test: blockdev comparev and writev ...[2024-07-25 09:29:25.205477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26c80a000 len:0x1000 00:15:24.674 [2024-07-25 09:29:25.205600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:24.674 passed 00:15:24.674 Test: blockdev nvme passthru rw ...passed 00:15:24.674 Test: blockdev nvme passthru vendor specific ...[2024-07-25 09:29:25.206559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:15:24.674 [2024-07-25 09:29:25.206663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:15:24.674 passed 00:15:24.674 Test: blockdev nvme admin passthru ...passed 00:15:24.674 Test: blockdev copy ...passed 00:15:24.674 Suite: bdevio tests on: Nvme2n3 00:15:24.674 Test: blockdev write read block ...passed 00:15:24.933 Test: blockdev write zeroes read block ...passed 00:15:24.933 Test: blockdev write zeroes read no split ...passed 00:15:24.933 Test: blockdev write zeroes read split ...passed 00:15:24.933 Test: blockdev write zeroes read split partial ...passed 00:15:24.933 Test: blockdev reset ...[2024-07-25 09:29:25.418568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:15:24.933 [2024-07-25 09:29:25.423525] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:24.933 passed 00:15:24.933 Test: blockdev write read 8 blocks ...passed 00:15:24.933 Test: blockdev write read size > 128k ...passed 00:15:24.933 Test: blockdev write read invalid size ...passed 00:15:24.933 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:24.933 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:24.933 Test: blockdev write read max offset ...passed 00:15:24.933 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:24.933 Test: blockdev writev readv 8 blocks ...passed 00:15:24.933 Test: blockdev writev readv 30 x 1block ...passed 00:15:24.933 Test: blockdev writev readv block ...passed 00:15:24.933 Test: blockdev writev readv size > 128k ...passed 00:15:24.933 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:24.933 Test: blockdev comparev and writev ...[2024-07-25 09:29:25.432123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x24ee04000 len:0x1000 00:15:24.933 [2024-07-25 09:29:25.432345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:24.933 passed 00:15:24.933 Test: blockdev nvme passthru rw ...passed 00:15:24.933 Test: blockdev nvme passthru vendor specific ...[2024-07-25 09:29:25.433225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:15:24.933 [2024-07-25 09:29:25.433328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:15:24.933 passed 00:15:24.933 Test: blockdev nvme admin passthru ...passed 00:15:24.933 Test: blockdev copy ...passed 00:15:24.933 Suite: bdevio tests on: Nvme2n2 00:15:24.933 Test: blockdev write read block ...passed 00:15:24.933 Test: blockdev write zeroes read block ...passed 00:15:25.193 Test: blockdev write zeroes read no split ...passed 00:15:25.193 Test: blockdev write zeroes read split ...passed 00:15:25.193 Test: blockdev write zeroes read split partial ...passed 00:15:25.193 Test: blockdev reset ...[2024-07-25 09:29:25.634305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:15:25.193 [2024-07-25 09:29:25.639303] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:25.193 passed 00:15:25.193 Test: blockdev write read 8 blocks ...passed 00:15:25.193 Test: blockdev write read size > 128k ...passed 00:15:25.193 Test: blockdev write read invalid size ...passed 00:15:25.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:25.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:25.193 Test: blockdev write read max offset ...passed 00:15:25.193 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:25.193 Test: blockdev writev readv 8 blocks ...passed 00:15:25.193 Test: blockdev writev readv 30 x 1block ...passed 00:15:25.193 Test: blockdev writev readv block ...passed 00:15:25.193 Test: blockdev writev readv size > 128k ...passed 00:15:25.193 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:25.193 Test: blockdev comparev and writev ...[2024-07-25 09:29:25.646712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e83a000 len:0x1000 00:15:25.193 [2024-07-25 09:29:25.646767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:25.193 passed 00:15:25.193 Test: blockdev nvme passthru rw ...passed 00:15:25.193 Test: blockdev nvme passthru vendor specific ...passed 00:15:25.193 Test: blockdev nvme admin passthru ...[2024-07-25 09:29:25.647643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:15:25.193 [2024-07-25 09:29:25.647677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:15:25.193 passed 00:15:25.193 Test: blockdev copy ...passed 00:15:25.193 Suite: bdevio tests on: Nvme2n1 00:15:25.193 Test: blockdev write read block ...passed 00:15:25.193 Test: blockdev write zeroes read block ...passed 00:15:25.193 Test: blockdev write zeroes read no split ...passed 00:15:25.193 Test: blockdev write zeroes read split ...passed 00:15:25.193 Test: blockdev write zeroes read split partial ...passed 00:15:25.193 Test: blockdev reset ...[2024-07-25 09:29:25.796726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:15:25.193 [2024-07-25 09:29:25.801794] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:25.193 passed 00:15:25.193 Test: blockdev write read 8 blocks ...passed 00:15:25.193 Test: blockdev write read size > 128k ...passed 00:15:25.193 Test: blockdev write read invalid size ...passed 00:15:25.193 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:25.193 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:25.193 Test: blockdev write read max offset ...passed 00:15:25.193 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:25.193 Test: blockdev writev readv 8 blocks ...passed 00:15:25.453 Test: blockdev writev readv 30 x 1block ...passed 00:15:25.453 Test: blockdev writev readv block ...passed 00:15:25.453 Test: blockdev writev readv size > 128k ...passed 00:15:25.453 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:25.453 Test: blockdev comparev and writev ...[2024-07-25 09:29:25.810317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e834000 len:0x1000 00:15:25.453 [2024-07-25 09:29:25.810435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:25.453 passed 00:15:25.453 Test: blockdev nvme passthru rw ...passed 00:15:25.453 Test: blockdev nvme passthru vendor specific ...[2024-07-25 09:29:25.811532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:15:25.453 [2024-07-25 09:29:25.811628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:15:25.453 passed 00:15:25.453 Test: blockdev nvme admin passthru ...passed 00:15:25.453 Test: blockdev copy ...passed 00:15:25.453 Suite: bdevio tests on: Nvme1n1 00:15:25.453 Test: blockdev write read block ...passed 00:15:25.453 Test: blockdev write zeroes read block ...passed 00:15:25.453 Test: blockdev write zeroes read no split ...passed 00:15:25.453 Test: blockdev write zeroes read split ...passed 00:15:25.453 Test: blockdev write zeroes read split partial ...passed 00:15:25.453 Test: blockdev reset ...[2024-07-25 09:29:25.978357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:15:25.453 [2024-07-25 09:29:25.982912] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:15:25.453 passed 00:15:25.453 Test: blockdev write read 8 blocks ...passed 00:15:25.453 Test: blockdev write read size > 128k ...passed 00:15:25.453 Test: blockdev write read invalid size ...passed 00:15:25.453 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:25.453 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:25.453 Test: blockdev write read max offset ...passed 00:15:25.453 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:25.453 Test: blockdev writev readv 8 blocks ...passed 00:15:25.453 Test: blockdev writev readv 30 x 1block ...passed 00:15:25.453 Test: blockdev writev readv block ...passed 00:15:25.453 Test: blockdev writev readv size > 128k ...passed 00:15:25.453 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:25.453 Test: blockdev comparev and writev ...[2024-07-25 09:29:25.991634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e830000 len:0x1000 00:15:25.453 [2024-07-25 09:29:25.991783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:15:25.453 passed 00:15:25.453 Test: blockdev nvme passthru rw ...passed 00:15:25.453 Test: blockdev nvme passthru vendor specific ...[2024-07-25 09:29:25.992756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:15:25.453 [2024-07-25 09:29:25.992834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:15:25.453 passed 00:15:25.453 Test: blockdev nvme admin passthru ...passed 00:15:25.453 Test: blockdev copy ...passed 00:15:25.453 Suite: bdevio tests on: Nvme0n1 00:15:25.453 Test: blockdev write read block ...passed 00:15:25.713 Test: blockdev write zeroes read block ...passed 00:15:25.713 Test: blockdev write zeroes read no split ...passed 00:15:25.713 Test: blockdev write zeroes read split ...passed 00:15:25.713 Test: blockdev write zeroes read split partial ...passed 00:15:25.713 Test: blockdev reset ...[2024-07-25 09:29:26.243873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:15:25.713 [2024-07-25 09:29:26.248418] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:15:25.713 passed 00:15:25.713 Test: blockdev write read 8 blocks ...passed 00:15:25.713 Test: blockdev write read size > 128k ...passed 00:15:25.713 Test: blockdev write read invalid size ...passed 00:15:25.713 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:25.713 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:25.713 Test: blockdev write read max offset ...passed 00:15:25.713 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:25.713 Test: blockdev writev readv 8 blocks ...passed 00:15:25.713 Test: blockdev writev readv 30 x 1block ...passed 00:15:25.713 Test: blockdev writev readv block ...passed 00:15:25.713 Test: blockdev writev readv size > 128k ...passed 00:15:25.713 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:25.713 Test: blockdev comparev and writev ...[2024-07-25 09:29:26.255817] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:15:25.713 separate metadata which is not supported yet. 
00:15:25.713 passed 00:15:25.713 Test: blockdev nvme passthru rw ...passed 00:15:25.713 Test: blockdev nvme passthru vendor specific ...passed 00:15:25.713 Test: blockdev nvme admin passthru ...[2024-07-25 09:29:26.256441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:15:25.713 [2024-07-25 09:29:26.256491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:15:25.713 passed 00:15:25.713 Test: blockdev copy ...passed 00:15:25.713 00:15:25.713 Run Summary: Type Total Ran Passed Failed Inactive 00:15:25.713 suites 6 6 n/a 0 0 00:15:25.713 tests 138 138 138 0 0 00:15:25.713 asserts 893 893 893 0 n/a 00:15:25.713 00:15:25.713 Elapsed time = 3.015 seconds 00:15:25.713 0 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 65515 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 65515 ']' 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 65515 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65515 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65515' 00:15:25.713 killing process with pid 65515 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 65515 00:15:25.713 09:29:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 65515 00:15:27.658 09:29:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:27.658 00:15:27.658 real 0m4.279s 00:15:27.658 user 0m10.605s 00:15:27.658 sys 0m0.403s 00:15:27.658 09:29:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:27.658 09:29:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:27.658 ************************************ 00:15:27.658 END TEST bdev_bounds 00:15:27.658 ************************************ 00:15:27.658 09:29:28 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:15:27.658 09:29:28 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:27.658 09:29:28 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:27.658 09:29:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.658 ************************************ 00:15:27.658 START TEST bdev_nbd 00:15:27.658 ************************************ 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:27.658 09:29:28 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=65597 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 65597 /var/tmp/spdk-nbd.sock 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 65597 ']' 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:27.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:27.658 09:29:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:27.658 [2024-07-25 09:29:28.129209] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
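The bdev_nbd stage starting here runs a bare bdev_svc application on its own RPC socket (/var/tmp/spdk-nbd.sock) and then, for each bdev, asks it to expose the bdev as a kernel /dev/nbdX device, reads one 4 KiB block through it with dd, and tears it down again; that is the pattern repeated per bdev in the output below. A condensed sketch for a single bdev, assuming the bdev_svc started above is still listening on that socket; the rpc() function is only shorthand for the rpc.py invocation shown in the log:

rpc() { ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
nbd_dev=$(rpc nbd_start_disk Nvme0n1)   # prints the device it claimed, e.g. /dev/nbd0
# one O_DIRECT 4 KiB read through the kernel block layer, as in the dd lines below
dd if="$nbd_dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
rpc nbd_get_disks                       # JSON list of nbd_device / bdev_name pairs
rpc nbd_stop_disk "$nbd_dev"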
00:15:27.658 [2024-07-25 09:29:28.129456] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.915 [2024-07-25 09:29:28.293035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.172 [2024-07-25 09:29:28.565905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:28.742 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.000 1+0 records in 
00:15:29.000 1+0 records out 00:15:29.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072861 s, 5.6 MB/s 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:29.000 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.258 1+0 records in 00:15:29.258 1+0 records out 00:15:29.258 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500697 s, 8.2 MB/s 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:29.258 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.518 1+0 records in 00:15:29.518 1+0 records out 00:15:29.518 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804538 s, 5.1 MB/s 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:29.518 09:29:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:29.777 1+0 records in 00:15:29.777 1+0 records out 00:15:29.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787061 s, 5.2 MB/s 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.777 09:29:30 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:29.777 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.036 1+0 records in 00:15:30.036 1+0 records out 00:15:30.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098695 s, 4.2 MB/s 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:30.036 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:30.295 1+0 records in 00:15:30.295 1+0 records out 00:15:30.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00084166 s, 4.9 MB/s 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd0", 00:15:30.295 "bdev_name": "Nvme0n1" 00:15:30.295 }, 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd1", 00:15:30.295 "bdev_name": "Nvme1n1" 00:15:30.295 }, 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd2", 00:15:30.295 "bdev_name": "Nvme2n1" 00:15:30.295 }, 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd3", 00:15:30.295 "bdev_name": "Nvme2n2" 00:15:30.295 }, 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd4", 00:15:30.295 "bdev_name": "Nvme2n3" 00:15:30.295 }, 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd5", 00:15:30.295 "bdev_name": "Nvme3n1" 00:15:30.295 } 00:15:30.295 ]' 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd0", 00:15:30.295 "bdev_name": "Nvme0n1" 00:15:30.295 }, 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd1", 00:15:30.295 "bdev_name": "Nvme1n1" 00:15:30.295 }, 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd2", 00:15:30.295 "bdev_name": "Nvme2n1" 00:15:30.295 }, 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd3", 00:15:30.295 "bdev_name": "Nvme2n2" 00:15:30.295 }, 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd4", 00:15:30.295 "bdev_name": "Nvme2n3" 00:15:30.295 }, 00:15:30.295 { 00:15:30.295 "nbd_device": "/dev/nbd5", 00:15:30.295 "bdev_name": "Nvme3n1" 00:15:30.295 } 00:15:30.295 ]' 00:15:30.295 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:30.553 09:29:30 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:30.553 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:30.553 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:30.553 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:30.553 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:30.553 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.553 09:29:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:30.553 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:30.553 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:30.553 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:30.553 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.553 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.553 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:30.553 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.553 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.553 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.553 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:30.812 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:30.812 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:30.812 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:30.812 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:30.812 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:30.812 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:30.812 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:30.812 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:30.812 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:30.812 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:31.071 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:31.071 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:31.071 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:31.071 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.071 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.071 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:15:31.071 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.071 09:29:31 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:15:31.071 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.071 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:31.330 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:31.330 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:31.330 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:31.330 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.330 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:31.331 09:29:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.591 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:31.851 09:29:32 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:31.851 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:15:32.111 /dev/nbd0 00:15:32.111 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.111 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.111 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:32.111 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:32.111 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:32.111 
09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:32.111 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:32.111 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:32.111 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:32.111 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:32.111 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.111 1+0 records in 00:15:32.111 1+0 records out 00:15:32.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108796 s, 3.8 MB/s 00:15:32.112 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.112 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:32.112 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.112 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:32.112 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:32.112 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.112 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:32.112 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:15:32.412 /dev/nbd1 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.412 1+0 records in 00:15:32.412 1+0 records out 00:15:32.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072329 s, 5.7 MB/s 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 
-- # return 0 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:32.412 09:29:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:15:32.671 /dev/nbd10 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.671 1+0 records in 00:15:32.671 1+0 records out 00:15:32.671 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000914283 s, 4.5 MB/s 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:32.671 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:32.672 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.672 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:32.672 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:15:32.929 /dev/nbd11 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.929 1+0 records in 00:15:32.929 1+0 records out 00:15:32.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000849929 s, 4.8 MB/s 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:32.929 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:15:33.187 /dev/nbd12 00:15:33.187 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:33.187 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:33.187 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:15:33.187 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:33.187 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:33.187 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:33.187 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:15:33.187 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:33.187 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:33.188 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:33.188 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.188 1+0 records in 00:15:33.188 1+0 records out 00:15:33.188 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467798 s, 8.8 MB/s 00:15:33.188 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.188 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:33.188 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.188 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:33.188 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:33.188 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.188 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:33.188 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:15:33.447 /dev/nbd13 00:15:33.447 09:29:33 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.447 1+0 records in 00:15:33.447 1+0 records out 00:15:33.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000845153 s, 4.8 MB/s 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.447 09:29:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd0", 00:15:33.707 "bdev_name": "Nvme0n1" 00:15:33.707 }, 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd1", 00:15:33.707 "bdev_name": "Nvme1n1" 00:15:33.707 }, 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd10", 00:15:33.707 "bdev_name": "Nvme2n1" 00:15:33.707 }, 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd11", 00:15:33.707 "bdev_name": "Nvme2n2" 00:15:33.707 }, 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd12", 00:15:33.707 "bdev_name": "Nvme2n3" 00:15:33.707 }, 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd13", 00:15:33.707 "bdev_name": "Nvme3n1" 00:15:33.707 } 00:15:33.707 ]' 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd0", 00:15:33.707 "bdev_name": "Nvme0n1" 00:15:33.707 }, 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd1", 00:15:33.707 "bdev_name": "Nvme1n1" 00:15:33.707 }, 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd10", 00:15:33.707 "bdev_name": "Nvme2n1" 00:15:33.707 }, 00:15:33.707 
{ 00:15:33.707 "nbd_device": "/dev/nbd11", 00:15:33.707 "bdev_name": "Nvme2n2" 00:15:33.707 }, 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd12", 00:15:33.707 "bdev_name": "Nvme2n3" 00:15:33.707 }, 00:15:33.707 { 00:15:33.707 "nbd_device": "/dev/nbd13", 00:15:33.707 "bdev_name": "Nvme3n1" 00:15:33.707 } 00:15:33.707 ]' 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:33.707 /dev/nbd1 00:15:33.707 /dev/nbd10 00:15:33.707 /dev/nbd11 00:15:33.707 /dev/nbd12 00:15:33.707 /dev/nbd13' 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:33.707 /dev/nbd1 00:15:33.707 /dev/nbd10 00:15:33.707 /dev/nbd11 00:15:33.707 /dev/nbd12 00:15:33.707 /dev/nbd13' 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:33.707 256+0 records in 00:15:33.707 256+0 records out 00:15:33.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124951 s, 83.9 MB/s 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:33.707 256+0 records in 00:15:33.707 256+0 records out 00:15:33.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0911339 s, 11.5 MB/s 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:33.707 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:33.967 256+0 records in 00:15:33.967 256+0 records out 00:15:33.967 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101755 s, 10.3 MB/s 00:15:33.967 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:33.967 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:33.967 256+0 records in 00:15:33.967 256+0 records out 00:15:33.967 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.103146 s, 10.2 MB/s 00:15:33.967 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:33.967 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:34.225 256+0 records in 00:15:34.225 256+0 records out 00:15:34.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.097329 s, 10.8 MB/s 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:34.225 256+0 records in 00:15:34.225 256+0 records out 00:15:34.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100138 s, 10.5 MB/s 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:34.225 256+0 records in 00:15:34.225 256+0 records out 00:15:34.225 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10909 s, 9.6 MB/s 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.225 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.483 09:29:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:34.742 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.742 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.742 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.742 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.742 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.742 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.742 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.742 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.742 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.742 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.005 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:35.006 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.006 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.006 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.006 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:35.265 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:35.265 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:35.265 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:35.265 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.265 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.265 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:35.265 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.265 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.265 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.265 09:29:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:35.523 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:35.523 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:35.523 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:35.523 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.523 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.523 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:35.523 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.523 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.523 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.523 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.781 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:15:36.038 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:36.298 malloc_lvol_verify 00:15:36.298 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:36.558 27f766e8-386b-4bf1-8eb2-4359bcc8a8be 00:15:36.558 09:29:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:36.558 cf11db80-970a-494d-a0e4-469d331e40f8 00:15:36.558 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:36.816 /dev/nbd0 00:15:36.816 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:15:36.816 mke2fs 1.46.5 (30-Dec-2021) 00:15:36.816 Discarding device blocks: 0/4096 done 00:15:36.816 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:36.816 00:15:36.816 Allocating group tables: 0/1 done 00:15:36.816 Writing inode tables: 0/1 done 00:15:36.816 Creating journal (1024 blocks): done 00:15:36.816 Writing superblocks and filesystem accounting information: 0/1 done 00:15:36.816 00:15:36.816 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:15:36.816 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:36.816 09:29:37 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.816 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:36.816 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:36.816 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:36.816 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.816 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 65597 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 65597 ']' 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 65597 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65597 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65597' 00:15:37.075 killing process with pid 65597 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 65597 00:15:37.075 09:29:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 65597 00:15:38.475 09:29:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:38.475 ************************************ 00:15:38.475 END TEST bdev_nbd 00:15:38.475 ************************************ 00:15:38.475 00:15:38.475 real 0m10.918s 00:15:38.475 user 0m14.763s 00:15:38.475 sys 0m3.687s 00:15:38.475 09:29:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.475 09:29:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:38.475 skipping fio tests on NVMe due to multi-ns failures. 00:15:38.475 09:29:39 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:15:38.475 09:29:39 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:15:38.475 09:29:39 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
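For reference, the waitfornbd helper that dominates the nbd trace above follows a simple pattern: poll /proc/partitions until the kernel lists the nbd device, then prove it is readable with one direct-I/O dd and a size check on the resulting scratch file. A rough reconstruction from the traced commands (the retry sleep is an assumption, since every attempt in this run succeeded on the first try, and the scratch-file path is shortened here):

    waitfornbd() {
        local nbd_name=$1 i
        # wait (up to 20 tries) for the device to appear in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between retries
        done
        # prove the device is readable: one 4 KiB direct read into a scratch file
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1   # assumed back-off between retries
        done
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]   # non-empty read => device is usable
    }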
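The data check in the middle of the trace (nbd_dd_data_verify) is equally compact: write the same 1 MiB of random data to every exposed NBD device with direct I/O, then byte-compare each device against the source file. Condensed from the traced commands, with the device list and scratch path shortened for readability:

    nbd_list="/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13"
    tmp_file=/tmp/nbdrandtest

    # write phase: 256 x 4 KiB of random data pushed to every device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in $nbd_list; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: the first 1 MiB of each device must match the source byte-for-byte
    for dev in $nbd_list; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"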
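Finally, the lvol-over-NBD check that closes the bdev_nbd test boils down to the RPC sequence below, taken directly from the trace (a minimal sketch; the relative rpc.py path assumes the SPDK repository root, and the socket is the /var/tmp/spdk-nbd.sock instance started by the test):

    rpc=./scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                                 # formatting succeeds => I/O path works
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0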
00:15:38.475 09:29:39 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:38.475 09:29:39 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:38.475 09:29:39 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:38.475 09:29:39 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.475 09:29:39 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:38.475 ************************************ 00:15:38.475 START TEST bdev_verify 00:15:38.475 ************************************ 00:15:38.475 09:29:39 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:38.734 [2024-07-25 09:29:39.178434] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:38.734 [2024-07-25 09:29:39.178634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65981 ] 00:15:38.734 [2024-07-25 09:29:39.343051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:38.992 [2024-07-25 09:29:39.578621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.992 [2024-07-25 09:29:39.578657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.924 Running I/O for 5 seconds... 00:15:45.191 00:15:45.191 Latency(us) 00:15:45.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.191 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0x0 length 0xbd0bd 00:15:45.191 Nvme0n1 : 5.07 1679.45 6.56 0.00 0.00 75820.89 13507.86 73262.95 00:15:45.191 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:45.191 Nvme0n1 : 5.07 1654.86 6.46 0.00 0.00 76982.19 12935.49 80589.25 00:15:45.191 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0x0 length 0xa0000 00:15:45.191 Nvme1n1 : 5.08 1686.61 6.59 0.00 0.00 75599.23 11447.34 67310.34 00:15:45.191 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0xa0000 length 0xa0000 00:15:45.191 Nvme1n1 : 5.07 1654.21 6.46 0.00 0.00 76916.94 12763.78 77841.89 00:15:45.191 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0x0 length 0x80000 00:15:45.191 Nvme2n1 : 5.09 1685.65 6.58 0.00 0.00 75474.09 12134.18 68684.02 00:15:45.191 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0x80000 length 0x80000 00:15:45.191 Nvme2n1 : 5.08 1661.95 6.49 0.00 0.00 76557.83 9844.71 75552.42 00:15:45.191 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0x0 length 0x80000 00:15:45.191 Nvme2n2 : 5.09 1684.72 6.58 0.00 0.00 75375.78 12935.49 69141.91 00:15:45.191 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0x80000 length 0x80000 00:15:45.191 Nvme2n2 : 5.08 1661.49 6.49 0.00 0.00 76411.67 10188.13 75552.42 00:15:45.191 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0x0 length 0x80000 00:15:45.191 Nvme2n3 : 5.09 1683.92 6.58 0.00 0.00 75281.46 13965.75 70057.70 00:15:45.191 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0x80000 length 0x80000 00:15:45.191 Nvme2n3 : 5.09 1660.55 6.49 0.00 0.00 76322.56 10989.44 79215.57 00:15:45.191 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0x0 length 0x20000 00:15:45.191 Nvme3n1 : 5.09 1683.55 6.58 0.00 0.00 75182.73 13622.33 71431.38 00:15:45.191 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:45.191 Verification LBA range: start 0x20000 length 0x20000 00:15:45.191 Nvme3n1 : 5.09 1659.63 6.48 0.00 0.00 76239.90 12019.70 77383.99 00:15:45.191 =================================================================================================================== 00:15:45.191 Total : 20056.58 78.35 0.00 0.00 76008.52 9844.71 80589.25 00:15:47.098 00:15:47.098 real 0m8.170s 00:15:47.098 user 0m14.811s 00:15:47.098 sys 0m0.305s 00:15:47.099 ************************************ 00:15:47.099 END TEST bdev_verify 00:15:47.099 ************************************ 00:15:47.099 09:29:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.099 09:29:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:47.099 09:29:47 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:47.099 09:29:47 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:15:47.099 09:29:47 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.099 09:29:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:47.099 ************************************ 00:15:47.099 START TEST bdev_verify_big_io 00:15:47.099 ************************************ 00:15:47.099 09:29:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:47.099 [2024-07-25 09:29:47.355081] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:47.099 [2024-07-25 09:29:47.355198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66085 ] 00:15:47.099 [2024-07-25 09:29:47.507468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:47.358 [2024-07-25 09:29:47.790124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.358 [2024-07-25 09:29:47.790161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.293 Running I/O for 5 seconds... 
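Both verification stages above (bdev_verify and bdev_verify_big_io) are driven by the standalone bdevperf example application; stripped of the test-wrapper plumbing, the two invocations reduce to something like the following sketch (paths relative to the SPDK repository root; the -C and -m 0x3 flags are simply passed through as the wrapper does):

    # 5-second verify workload: queue depth 128, 4 KiB I/Os, cores 0-1
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

    # same workload with 64 KiB I/Os, as used by the big-I/O pass
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3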
00:15:54.864 00:15:54.864 Latency(us) 00:15:54.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.864 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0x0 length 0xbd0b 00:15:54.864 Nvme0n1 : 5.72 155.02 9.69 0.00 0.00 811991.86 30907.81 794903.03 00:15:54.864 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:54.864 Nvme0n1 : 5.73 156.32 9.77 0.00 0.00 798901.99 21864.41 824208.21 00:15:54.864 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0x0 length 0xa000 00:15:54.864 Nvme1n1 : 5.72 152.19 9.51 0.00 0.00 804970.76 33655.17 765597.85 00:15:54.864 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0xa000 length 0xa000 00:15:54.864 Nvme1n1 : 5.71 156.83 9.80 0.00 0.00 768665.48 72805.06 747282.11 00:15:54.864 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0x0 length 0x8000 00:15:54.864 Nvme2n1 : 5.72 152.50 9.53 0.00 0.00 784844.86 33197.28 765597.85 00:15:54.864 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0x8000 length 0x8000 00:15:54.864 Nvme2n1 : 5.74 160.88 10.05 0.00 0.00 740526.63 16255.22 875492.28 00:15:54.864 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0x0 length 0x8000 00:15:54.864 Nvme2n2 : 5.73 157.04 9.81 0.00 0.00 746882.53 9673.00 1018355.03 00:15:54.864 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0x8000 length 0x8000 00:15:54.864 Nvme2n2 : 5.75 160.74 10.05 0.00 0.00 726093.48 13278.91 1289427.95 00:15:54.864 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0x0 length 0x8000 00:15:54.864 Nvme2n3 : 5.74 156.86 9.80 0.00 0.00 728279.37 9901.95 827871.36 00:15:54.864 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0x8000 length 0x8000 00:15:54.864 Nvme2n3 : 5.75 160.35 10.02 0.00 0.00 709909.60 14080.22 1304080.54 00:15:54.864 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0x0 length 0x2000 00:15:54.864 Nvme3n1 : 5.74 159.49 9.97 0.00 0.00 699086.76 12821.02 827871.36 00:15:54.864 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:54.864 Verification LBA range: start 0x2000 length 0x2000 00:15:54.864 Nvme3n1 : 5.75 163.76 10.23 0.00 0.00 678498.08 6038.47 1333385.73 00:15:54.865 =================================================================================================================== 00:15:54.865 Total : 1891.98 118.25 0.00 0.00 749033.13 6038.47 1333385.73 00:15:57.395 00:15:57.395 real 0m10.296s 00:15:57.395 user 0m19.058s 00:15:57.395 sys 0m0.346s 00:15:57.395 09:29:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:57.395 ************************************ 00:15:57.395 END TEST bdev_verify_big_io 00:15:57.395 ************************************ 00:15:57.395 09:29:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:57.395 09:29:57 
blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:57.396 09:29:57 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:15:57.396 09:29:57 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:57.396 09:29:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.396 ************************************ 00:15:57.396 START TEST bdev_write_zeroes 00:15:57.396 ************************************ 00:15:57.396 09:29:57 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:57.396 [2024-07-25 09:29:57.708710] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:57.396 [2024-07-25 09:29:57.708827] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66216 ] 00:15:57.396 [2024-07-25 09:29:57.862383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.655 [2024-07-25 09:29:58.087915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.223 Running I/O for 1 seconds... 00:15:59.595 00:15:59.595 Latency(us) 00:15:59.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.595 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:59.595 Nvme0n1 : 1.01 10625.69 41.51 0.00 0.00 12008.16 9329.58 29992.02 00:15:59.595 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:59.595 Nvme1n1 : 1.01 10612.85 41.46 0.00 0.00 12004.95 9787.47 30220.97 00:15:59.595 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:59.595 Nvme2n1 : 1.02 10653.21 41.61 0.00 0.00 11880.04 6439.13 24268.35 00:15:59.595 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:59.595 Nvme2n2 : 1.02 10640.56 41.56 0.00 0.00 11852.59 6753.93 22093.36 00:15:59.595 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:59.595 Nvme2n3 : 1.02 10628.16 41.52 0.00 0.00 11830.11 7011.49 20605.21 00:15:59.595 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:59.595 Nvme3n1 : 1.03 10655.13 41.62 0.00 0.00 11787.14 6954.26 19231.52 00:15:59.595 =================================================================================================================== 00:15:59.595 Total : 63815.60 249.28 0.00 0.00 11893.28 6439.13 30220.97 00:16:00.531 00:16:00.531 real 0m3.415s 00:16:00.531 user 0m3.063s 00:16:00.531 sys 0m0.238s 00:16:00.531 09:30:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:00.531 09:30:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 ************************************ 00:16:00.531 END TEST bdev_write_zeroes 00:16:00.531 ************************************ 00:16:00.531 09:30:01 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 
-q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:00.531 09:30:01 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:00.531 09:30:01 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:00.531 09:30:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:16:00.531 ************************************ 00:16:00.531 START TEST bdev_json_nonenclosed 00:16:00.531 ************************************ 00:16:00.531 09:30:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:00.794 [2024-07-25 09:30:01.188558] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:00.794 [2024-07-25 09:30:01.188685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66275 ] 00:16:00.794 [2024-07-25 09:30:01.349764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.070 [2024-07-25 09:30:01.585166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.070 [2024-07-25 09:30:01.585281] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:01.070 [2024-07-25 09:30:01.585303] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:01.070 [2024-07-25 09:30:01.585315] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:01.640 00:16:01.640 real 0m0.942s 00:16:01.640 user 0m0.709s 00:16:01.640 sys 0m0.128s 00:16:01.640 09:30:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:01.640 09:30:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:01.640 ************************************ 00:16:01.640 END TEST bdev_json_nonenclosed 00:16:01.640 ************************************ 00:16:01.640 09:30:02 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:01.640 09:30:02 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:01.640 09:30:02 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:01.640 09:30:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:16:01.640 ************************************ 00:16:01.640 START TEST bdev_json_nonarray 00:16:01.640 ************************************ 00:16:01.640 09:30:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:01.640 [2024-07-25 09:30:02.192395] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:01.640 [2024-07-25 09:30:02.192519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66305 ] 00:16:01.900 [2024-07-25 09:30:02.359923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.158 [2024-07-25 09:30:02.592046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.158 [2024-07-25 09:30:02.592162] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:02.158 [2024-07-25 09:30:02.592182] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:02.158 [2024-07-25 09:30:02.592194] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:02.726 00:16:02.726 real 0m0.937s 00:16:02.726 user 0m0.688s 00:16:02.726 sys 0m0.142s 00:16:02.726 09:30:03 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.726 09:30:03 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:02.726 ************************************ 00:16:02.726 END TEST bdev_json_nonarray 00:16:02.726 ************************************ 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:16:02.726 09:30:03 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:16:02.727 00:16:02.727 real 0m46.594s 00:16:02.727 user 1m10.536s 00:16:02.727 sys 0m6.444s 00:16:02.727 09:30:03 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:02.727 09:30:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:16:02.727 ************************************ 00:16:02.727 END TEST blockdev_nvme 00:16:02.727 ************************************ 00:16:02.727 09:30:03 -- spdk/autotest.sh@217 -- # uname -s 00:16:02.727 09:30:03 -- spdk/autotest.sh@217 -- # [[ Linux == Linux ]] 00:16:02.727 09:30:03 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:16:02.727 09:30:03 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:02.727 09:30:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:02.727 09:30:03 -- common/autotest_common.sh@10 -- # set +x 00:16:02.727 ************************************ 00:16:02.727 START TEST blockdev_nvme_gpt 00:16:02.727 ************************************ 00:16:02.727 09:30:03 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:16:02.727 * Looking for test storage... 
00:16:02.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66382 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:02.727 09:30:03 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 66382 00:16:02.727 09:30:03 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 66382 ']' 00:16:02.727 09:30:03 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.727 09:30:03 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:02.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.727 09:30:03 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:02.727 09:30:03 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:02.727 09:30:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:02.985 [2024-07-25 09:30:03.412239] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:02.985 [2024-07-25 09:30:03.412385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66382 ] 00:16:02.985 [2024-07-25 09:30:03.573313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.244 [2024-07-25 09:30:03.802654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.182 09:30:04 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.182 09:30:04 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:16:04.182 09:30:04 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:16:04.182 09:30:04 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:16:04.182 09:30:04 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:04.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:05.011 Waiting for block devices as requested 00:16:05.011 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:05.011 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:05.270 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:16:05.270 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:16:10.553 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # local nvme bdf 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local 
device=nvme2n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:16:10.553 BYT; 00:16:10.553 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:16:10.553 BYT; 00:16:10.553 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ 
\u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:16:10.553 09:30:10 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:16:10.553 09:30:10 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:16:11.492 The operation has completed successfully. 
00:16:11.492 09:30:11 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:16:12.429 The operation has completed successfully. 00:16:12.429 09:30:12 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:12.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:13.935 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.935 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.935 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.935 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:13.935 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:16:13.935 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.935 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:13.935 [] 00:16:13.935 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.935 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:16:13.935 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:16:13.935 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:16:13.935 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:14.194 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:16:14.194 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.194 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.454 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.454 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:16:14.454 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.454 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.454 
09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.454 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:16:14.454 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:16:14.454 09:30:14 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.454 09:30:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:14.454 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.454 09:30:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:16:14.454 09:30:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:16:14.455 09:30:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "6bccc255-cc06-4bfb-81d5-012d6689bbd4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "6bccc255-cc06-4bfb-81d5-012d6689bbd4",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e9bcae90-4a76-46ee-b98c-1afe97b672b2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e9bcae90-4a76-46ee-b98c-1afe97b672b2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5bba239e-f889-43df-acf0-ca2bbbdedb30"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5bba239e-f889-43df-acf0-ca2bbbdedb30",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' 
"nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3dc9ac46-1c52-452e-a9ab-449e67132987"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3dc9ac46-1c52-452e-a9ab-449e67132987",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e2cc91de-cdb3-445f-b80d-7e6b8b40cffc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e2cc91de-cdb3-445f-b80d-7e6b8b40cffc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": 
"0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:16:14.455 09:30:15 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:16:14.455 09:30:15 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:16:14.455 09:30:15 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:16:14.455 09:30:15 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 66382 00:16:14.455 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 66382 ']' 00:16:14.455 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 66382 00:16:14.455 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:16:14.715 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:14.715 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66382 00:16:14.715 killing process with pid 66382 00:16:14.715 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:14.715 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:14.715 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66382' 00:16:14.715 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 66382 00:16:14.715 09:30:15 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 66382 00:16:17.252 09:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:17.252 09:30:17 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:16:17.252 09:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:17.252 09:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:17.252 09:30:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:17.252 ************************************ 00:16:17.252 START TEST bdev_hello_world 00:16:17.252 ************************************ 00:16:17.252 09:30:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:16:17.252 [2024-07-25 09:30:17.587359] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:17.252 [2024-07-25 09:30:17.587968] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67025 ] 00:16:17.252 [2024-07-25 09:30:17.750851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.511 [2024-07-25 09:30:17.974067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.079 [2024-07-25 09:30:18.633587] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:18.079 [2024-07-25 09:30:18.633648] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:16:18.079 [2024-07-25 09:30:18.633684] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:18.079 [2024-07-25 09:30:18.636485] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:18.079 [2024-07-25 09:30:18.636995] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:18.079 [2024-07-25 09:30:18.637029] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:18.079 [2024-07-25 09:30:18.637257] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:16:18.079 00:16:18.079 [2024-07-25 09:30:18.637279] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:19.457 00:16:19.457 real 0m2.407s 00:16:19.457 user 0m2.078s 00:16:19.457 sys 0m0.223s 00:16:19.457 ************************************ 00:16:19.457 END TEST bdev_hello_world 00:16:19.457 ************************************ 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:19.457 09:30:19 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:16:19.457 09:30:19 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:19.457 09:30:19 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.457 09:30:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:19.457 ************************************ 00:16:19.457 START TEST bdev_bounds 00:16:19.457 ************************************ 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=67067 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 67067' 00:16:19.457 Process bdevio pid: 67067 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 67067 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 67067 ']' 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.457 09:30:19 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.457 09:30:19 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:19.457 [2024-07-25 09:30:20.055505] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:19.457 [2024-07-25 09:30:20.055695] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67067 ] 00:16:19.715 [2024-07-25 09:30:20.213120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:19.974 [2024-07-25 09:30:20.432652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.974 [2024-07-25 09:30:20.432791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.974 [2024-07-25 09:30:20.432841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.542 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.542 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:16:20.542 09:30:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:20.801 I/O targets: 00:16:20.801 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:16:20.801 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:16:20.801 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:16:20.802 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:20.802 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:20.802 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:16:20.802 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:16:20.802 00:16:20.802 00:16:20.802 CUnit - A unit testing framework for C - Version 2.1-3 00:16:20.802 http://cunit.sourceforge.net/ 00:16:20.802 00:16:20.802 00:16:20.802 Suite: bdevio tests on: Nvme3n1 00:16:20.802 Test: blockdev write read block ...passed 00:16:20.802 Test: blockdev write zeroes read block ...passed 00:16:20.802 Test: blockdev write zeroes read no split ...passed 00:16:20.802 Test: blockdev write zeroes read split ...passed 00:16:20.802 Test: blockdev write zeroes read split partial ...passed 00:16:20.802 Test: blockdev reset ...[2024-07-25 09:30:21.331065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:16:20.802 [2024-07-25 09:30:21.335659] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:20.802 passed 00:16:20.802 Test: blockdev write read 8 blocks ...passed 00:16:20.802 Test: blockdev write read size > 128k ...passed 00:16:20.802 Test: blockdev write read invalid size ...passed 00:16:20.802 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:20.802 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:20.802 Test: blockdev write read max offset ...passed 00:16:20.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:20.802 Test: blockdev writev readv 8 blocks ...passed 00:16:20.802 Test: blockdev writev readv 30 x 1block ...passed 00:16:20.802 Test: blockdev writev readv block ...passed 00:16:20.802 Test: blockdev writev readv size > 128k ...passed 00:16:20.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:20.802 Test: blockdev comparev and writev ...[2024-07-25 09:30:21.344424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x268006000 len:0x1000 00:16:20.802 [2024-07-25 09:30:21.344506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:16:20.802 passed 00:16:20.802 Test: blockdev nvme passthru rw ...passed 00:16:20.802 Test: blockdev nvme passthru vendor specific ...passed 00:16:20.802 Test: blockdev nvme admin passthru ...[2024-07-25 09:30:21.345379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:16:20.802 [2024-07-25 09:30:21.345426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:16:20.802 passed 00:16:20.802 Test: blockdev copy ...passed 00:16:20.802 Suite: bdevio tests on: Nvme2n3 00:16:20.802 Test: blockdev write read block ...passed 00:16:20.802 Test: blockdev write zeroes read block ...passed 00:16:20.802 Test: blockdev write zeroes read no split ...passed 00:16:20.802 Test: blockdev write zeroes read split ...passed 00:16:21.062 Test: blockdev write zeroes read split partial ...passed 00:16:21.062 Test: blockdev reset ...[2024-07-25 09:30:21.424506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:16:21.062 [2024-07-25 09:30:21.429426] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:21.062 passed 00:16:21.062 Test: blockdev write read 8 blocks ...passed 00:16:21.062 Test: blockdev write read size > 128k ...passed 00:16:21.062 Test: blockdev write read invalid size ...passed 00:16:21.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:21.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:21.062 Test: blockdev write read max offset ...passed 00:16:21.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:21.062 Test: blockdev writev readv 8 blocks ...passed 00:16:21.062 Test: blockdev writev readv 30 x 1block ...passed 00:16:21.062 Test: blockdev writev readv block ...passed 00:16:21.062 Test: blockdev writev readv size > 128k ...passed 00:16:21.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:21.062 Test: blockdev comparev and writev ...[2024-07-25 09:30:21.437907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27bc3c000 len:0x1000 00:16:21.062 [2024-07-25 09:30:21.437985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:16:21.062 passed 00:16:21.062 Test: blockdev nvme passthru rw ...passed 00:16:21.062 Test: blockdev nvme passthru vendor specific ...passed 00:16:21.062 Test: blockdev nvme admin passthru ...[2024-07-25 09:30:21.438811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:16:21.062 [2024-07-25 09:30:21.438857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:16:21.062 passed 00:16:21.062 Test: blockdev copy ...passed 00:16:21.062 Suite: bdevio tests on: Nvme2n2 00:16:21.062 Test: blockdev write read block ...passed 00:16:21.062 Test: blockdev write zeroes read block ...passed 00:16:21.062 Test: blockdev write zeroes read no split ...passed 00:16:21.062 Test: blockdev write zeroes read split ...passed 00:16:21.062 Test: blockdev write zeroes read split partial ...passed 00:16:21.062 Test: blockdev reset ...[2024-07-25 09:30:21.521318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:16:21.062 [2024-07-25 09:30:21.526450] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:21.062 passed 00:16:21.062 Test: blockdev write read 8 blocks ...passed 00:16:21.062 Test: blockdev write read size > 128k ...passed 00:16:21.062 Test: blockdev write read invalid size ...passed 00:16:21.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:21.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:21.062 Test: blockdev write read max offset ...passed 00:16:21.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:21.062 Test: blockdev writev readv 8 blocks ...passed 00:16:21.062 Test: blockdev writev readv 30 x 1block ...passed 00:16:21.062 Test: blockdev writev readv block ...passed 00:16:21.062 Test: blockdev writev readv size > 128k ...passed 00:16:21.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:21.062 Test: blockdev comparev and writev ...[2024-07-25 09:30:21.535368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27bc36000 len:0x1000 00:16:21.062 [2024-07-25 09:30:21.535528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:16:21.062 passed 00:16:21.062 Test: blockdev nvme passthru rw ...passed 00:16:21.062 Test: blockdev nvme passthru vendor specific ...[2024-07-25 09:30:21.536455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:16:21.062 [2024-07-25 09:30:21.536552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:16:21.062 passed 00:16:21.062 Test: blockdev nvme admin passthru ...passed 00:16:21.062 Test: blockdev copy ...passed 00:16:21.062 Suite: bdevio tests on: Nvme2n1 00:16:21.062 Test: blockdev write read block ...passed 00:16:21.062 Test: blockdev write zeroes read block ...passed 00:16:21.062 Test: blockdev write zeroes read no split ...passed 00:16:21.062 Test: blockdev write zeroes read split ...passed 00:16:21.062 Test: blockdev write zeroes read split partial ...passed 00:16:21.062 Test: blockdev reset ...[2024-07-25 09:30:21.613333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:16:21.062 [2024-07-25 09:30:21.618196] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:21.062 passed 00:16:21.062 Test: blockdev write read 8 blocks ...passed 00:16:21.062 Test: blockdev write read size > 128k ...passed 00:16:21.062 Test: blockdev write read invalid size ...passed 00:16:21.062 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:21.062 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:21.062 Test: blockdev write read max offset ...passed 00:16:21.062 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:21.062 Test: blockdev writev readv 8 blocks ...passed 00:16:21.062 Test: blockdev writev readv 30 x 1block ...passed 00:16:21.062 Test: blockdev writev readv block ...passed 00:16:21.062 Test: blockdev writev readv size > 128k ...passed 00:16:21.062 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:21.062 Test: blockdev comparev and writev ...[2024-07-25 09:30:21.626841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27bc32000 len:0x1000 00:16:21.062 [2024-07-25 09:30:21.627011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:16:21.062 passed 00:16:21.062 Test: blockdev nvme passthru rw ...passed 00:16:21.062 Test: blockdev nvme passthru vendor specific ...[2024-07-25 09:30:21.627906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:16:21.062 [2024-07-25 09:30:21.628024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:16:21.062 passed 00:16:21.062 Test: blockdev nvme admin passthru ...passed 00:16:21.062 Test: blockdev copy ...passed 00:16:21.062 Suite: bdevio tests on: Nvme1n1p2 00:16:21.062 Test: blockdev write read block ...passed 00:16:21.062 Test: blockdev write zeroes read block ...passed 00:16:21.062 Test: blockdev write zeroes read no split ...passed 00:16:21.322 Test: blockdev write zeroes read split ...passed 00:16:21.322 Test: blockdev write zeroes read split partial ...passed 00:16:21.322 Test: blockdev reset ...[2024-07-25 09:30:21.707832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:16:21.322 [2024-07-25 09:30:21.712383] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:21.322 passed 00:16:21.322 Test: blockdev write read 8 blocks ...passed 00:16:21.322 Test: blockdev write read size > 128k ...passed 00:16:21.322 Test: blockdev write read invalid size ...passed 00:16:21.322 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:21.322 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:21.322 Test: blockdev write read max offset ...passed 00:16:21.322 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:21.322 Test: blockdev writev readv 8 blocks ...passed 00:16:21.322 Test: blockdev writev readv 30 x 1block ...passed 00:16:21.322 Test: blockdev writev readv block ...passed 00:16:21.322 Test: blockdev writev readv size > 128k ...passed 00:16:21.322 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:21.322 Test: blockdev comparev and writev ...[2024-07-25 09:30:21.721306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x27bc2e000 len:0x1000 00:16:21.322 [2024-07-25 09:30:21.721431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:16:21.322 passed 00:16:21.322 Test: blockdev nvme passthru rw ...passed 00:16:21.322 Test: blockdev nvme passthru vendor specific ...passed 00:16:21.322 Test: blockdev nvme admin passthru ...passed 00:16:21.322 Test: blockdev copy ...passed 00:16:21.322 Suite: bdevio tests on: Nvme1n1p1 00:16:21.322 Test: blockdev write read block ...passed 00:16:21.322 Test: blockdev write zeroes read block ...passed 00:16:21.322 Test: blockdev write zeroes read no split ...passed 00:16:21.322 Test: blockdev write zeroes read split ...passed 00:16:21.322 Test: blockdev write zeroes read split partial ...passed 00:16:21.322 Test: blockdev reset ...[2024-07-25 09:30:21.791380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:16:21.322 [2024-07-25 09:30:21.795616] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:21.322 passed 00:16:21.322 Test: blockdev write read 8 blocks ...passed 00:16:21.322 Test: blockdev write read size > 128k ...passed 00:16:21.322 Test: blockdev write read invalid size ...passed 00:16:21.322 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:21.322 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:21.322 Test: blockdev write read max offset ...passed 00:16:21.323 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:21.323 Test: blockdev writev readv 8 blocks ...passed 00:16:21.323 Test: blockdev writev readv 30 x 1block ...passed 00:16:21.323 Test: blockdev writev readv block ...passed 00:16:21.323 Test: blockdev writev readv size > 128k ...passed 00:16:21.323 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:21.323 Test: blockdev comparev and writev ...[2024-07-25 09:30:21.804494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x27900e000 len:0x1000 00:16:21.323 [2024-07-25 09:30:21.804604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:16:21.323 passed 00:16:21.323 Test: blockdev nvme passthru rw ...passed 00:16:21.323 Test: blockdev nvme passthru vendor specific ...passed 00:16:21.323 Test: blockdev nvme admin passthru ...passed 00:16:21.323 Test: blockdev copy ...passed 00:16:21.323 Suite: bdevio tests on: Nvme0n1 00:16:21.323 Test: blockdev write read block ...passed 00:16:21.323 Test: blockdev write zeroes read block ...passed 00:16:21.323 Test: blockdev write zeroes read no split ...passed 00:16:21.323 Test: blockdev write zeroes read split ...passed 00:16:21.323 Test: blockdev write zeroes read split partial ...passed 00:16:21.323 Test: blockdev reset ...[2024-07-25 09:30:21.877030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:16:21.323 [2024-07-25 09:30:21.881535] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:21.323 passed 00:16:21.323 Test: blockdev write read 8 blocks ...passed 00:16:21.323 Test: blockdev write read size > 128k ...passed 00:16:21.323 Test: blockdev write read invalid size ...passed 00:16:21.323 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:21.323 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:21.323 Test: blockdev write read max offset ...passed 00:16:21.323 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:21.323 Test: blockdev writev readv 8 blocks ...passed 00:16:21.323 Test: blockdev writev readv 30 x 1block ...passed 00:16:21.323 Test: blockdev writev readv block ...passed 00:16:21.323 Test: blockdev writev readv size > 128k ...passed 00:16:21.323 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:21.323 Test: blockdev comparev and writev ...passed 00:16:21.323 Test: blockdev nvme passthru rw ...[2024-07-25 09:30:21.888819] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:16:21.323 separate metadata which is not supported yet. 
00:16:21.323 passed 00:16:21.323 Test: blockdev nvme passthru vendor specific ...passed 00:16:21.323 Test: blockdev nvme admin passthru ...[2024-07-25 09:30:21.889334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:16:21.323 [2024-07-25 09:30:21.889391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:16:21.323 passed 00:16:21.323 Test: blockdev copy ...passed 00:16:21.323 00:16:21.323 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.323 suites 7 7 n/a 0 0 00:16:21.323 tests 161 161 161 0 0 00:16:21.323 asserts 1025 1025 1025 0 n/a 00:16:21.323 00:16:21.323 Elapsed time = 1.752 seconds 00:16:21.323 0 00:16:21.323 09:30:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 67067 00:16:21.323 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 67067 ']' 00:16:21.323 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 67067 00:16:21.323 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:16:21.323 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:21.582 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67067 00:16:21.582 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:21.582 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:21.582 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67067' 00:16:21.582 killing process with pid 67067 00:16:21.582 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 67067 00:16:21.583 09:30:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 67067 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:22.529 00:16:22.529 real 0m3.058s 00:16:22.529 user 0m7.534s 00:16:22.529 sys 0m0.387s 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:22.529 ************************************ 00:16:22.529 END TEST bdev_bounds 00:16:22.529 ************************************ 00:16:22.529 09:30:23 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:16:22.529 09:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:22.529 09:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:22.529 09:30:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:22.529 ************************************ 00:16:22.529 START TEST bdev_nbd 00:16:22.529 ************************************ 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=67132 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 67132 /var/tmp/spdk-nbd.sock 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 67132 ']' 00:16:22.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.529 09:30:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:22.800 [2024-07-25 09:30:23.192413] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:22.800 [2024-07-25 09:30:23.192525] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.800 [2024-07-25 09:30:23.351586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.059 [2024-07-25 09:30:23.570316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:23.999 1+0 records in 00:16:23.999 1+0 records out 00:16:23.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861057 s, 4.8 MB/s 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:16:23.999 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.259 1+0 records in 00:16:24.259 1+0 records out 00:16:24.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051217 s, 8.0 MB/s 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:16:24.259 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.517 1+0 records in 00:16:24.517 1+0 records out 00:16:24.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650839 s, 6.3 MB/s 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:16:24.517 09:30:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.774 1+0 records in 00:16:24.774 1+0 records out 00:16:24.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000784677 s, 5.2 MB/s 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:16:24.774 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:25.033 1+0 records in 00:16:25.033 1+0 records out 00:16:25.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000595523 s, 6.9 MB/s 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:25.033 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:25.291 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:25.291 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:16:25.291 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:25.291 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:25.291 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:25.291 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:16:25.291 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:25.291 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:25.291 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:25.292 1+0 records in 00:16:25.292 1+0 records out 00:16:25.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618954 s, 6.6 MB/s 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:25.292 1+0 records in 00:16:25.292 1+0 records out 00:16:25.292 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559909 s, 7.3 MB/s 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:16:25.292 09:30:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd0", 00:16:25.551 "bdev_name": "Nvme0n1" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd1", 00:16:25.551 "bdev_name": "Nvme1n1p1" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd2", 00:16:25.551 "bdev_name": "Nvme1n1p2" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd3", 00:16:25.551 "bdev_name": "Nvme2n1" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd4", 00:16:25.551 "bdev_name": "Nvme2n2" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd5", 00:16:25.551 "bdev_name": "Nvme2n3" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd6", 00:16:25.551 "bdev_name": "Nvme3n1" 00:16:25.551 } 00:16:25.551 ]' 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd0", 00:16:25.551 "bdev_name": "Nvme0n1" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd1", 00:16:25.551 "bdev_name": "Nvme1n1p1" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd2", 00:16:25.551 "bdev_name": "Nvme1n1p2" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd3", 00:16:25.551 "bdev_name": "Nvme2n1" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd4", 00:16:25.551 "bdev_name": "Nvme2n2" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd5", 00:16:25.551 "bdev_name": "Nvme2n3" 00:16:25.551 }, 00:16:25.551 { 00:16:25.551 "nbd_device": "/dev/nbd6", 00:16:25.551 "bdev_name": "Nvme3n1" 00:16:25.551 } 00:16:25.551 ]' 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:25.551 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:25.809 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:25.809 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:25.809 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:25.809 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:25.809 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:25.809 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:25.809 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:25.809 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:25.809 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:25.809 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:26.068 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:26.068 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:26.068 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:26.068 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.068 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.068 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:26.068 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:26.068 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.068 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.068 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.327 09:30:26 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.327 09:30:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:26.587 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:26.587 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:26.587 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:26.587 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.587 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.587 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:26.587 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:26.587 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.587 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.587 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:16:26.846 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:27.106 
09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:27.106 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:16:27.364 /dev/nbd0 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.364 1+0 records in 00:16:27.364 1+0 records out 00:16:27.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072145 s, 5.7 MB/s 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:16:27.364 09:30:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:16:27.623 /dev/nbd1 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:27.623 09:30:28 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.623 1+0 records in 00:16:27.623 1+0 records out 00:16:27.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691162 s, 5.9 MB/s 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:16:27.623 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:16:27.882 /dev/nbd10 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.882 1+0 records in 00:16:27.882 1+0 records out 00:16:27.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610082 s, 6.7 MB/s 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:16:27.882 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:16:28.168 /dev/nbd11 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.168 1+0 records in 00:16:28.168 1+0 records out 00:16:28.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519749 s, 7.9 MB/s 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:16:28.168 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:16:28.168 /dev/nbd12 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.427 1+0 records in 00:16:28.427 1+0 records out 00:16:28.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861119 s, 4.8 MB/s 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:16:28.427 09:30:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:16:28.427 /dev/nbd13 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.427 1+0 records in 00:16:28.427 1+0 records out 00:16:28.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796571 s, 5.1 MB/s 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:28.427 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:16:28.687 /dev/nbd14 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.687 1+0 records in 00:16:28.687 1+0 records out 00:16:28.687 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000898045 s, 4.6 MB/s 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:28.687 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd0", 00:16:28.947 "bdev_name": "Nvme0n1" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd1", 00:16:28.947 "bdev_name": "Nvme1n1p1" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd10", 00:16:28.947 "bdev_name": "Nvme1n1p2" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd11", 00:16:28.947 "bdev_name": "Nvme2n1" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd12", 00:16:28.947 "bdev_name": "Nvme2n2" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd13", 00:16:28.947 "bdev_name": "Nvme2n3" 
00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd14", 00:16:28.947 "bdev_name": "Nvme3n1" 00:16:28.947 } 00:16:28.947 ]' 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd0", 00:16:28.947 "bdev_name": "Nvme0n1" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd1", 00:16:28.947 "bdev_name": "Nvme1n1p1" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd10", 00:16:28.947 "bdev_name": "Nvme1n1p2" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd11", 00:16:28.947 "bdev_name": "Nvme2n1" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd12", 00:16:28.947 "bdev_name": "Nvme2n2" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd13", 00:16:28.947 "bdev_name": "Nvme2n3" 00:16:28.947 }, 00:16:28.947 { 00:16:28.947 "nbd_device": "/dev/nbd14", 00:16:28.947 "bdev_name": "Nvme3n1" 00:16:28.947 } 00:16:28.947 ]' 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:28.947 /dev/nbd1 00:16:28.947 /dev/nbd10 00:16:28.947 /dev/nbd11 00:16:28.947 /dev/nbd12 00:16:28.947 /dev/nbd13 00:16:28.947 /dev/nbd14' 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:28.947 /dev/nbd1 00:16:28.947 /dev/nbd10 00:16:28.947 /dev/nbd11 00:16:28.947 /dev/nbd12 00:16:28.947 /dev/nbd13 00:16:28.947 /dev/nbd14' 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:28.947 256+0 records in 00:16:28.947 256+0 records out 00:16:28.947 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048522 s, 216 MB/s 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:28.947 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:29.206 256+0 records in 00:16:29.206 256+0 records out 00:16:29.206 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0951227 s, 11.0 MB/s 00:16:29.206 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:29.206 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:29.206 256+0 records in 00:16:29.206 256+0 records out 00:16:29.207 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105883 s, 9.9 MB/s 00:16:29.207 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:29.207 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:29.466 256+0 records in 00:16:29.466 256+0 records out 00:16:29.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.103301 s, 10.2 MB/s 00:16:29.466 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:29.466 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:29.466 256+0 records in 00:16:29.466 256+0 records out 00:16:29.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113248 s, 9.3 MB/s 00:16:29.466 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:29.466 09:30:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:29.466 256+0 records in 00:16:29.466 256+0 records out 00:16:29.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0991897 s, 10.6 MB/s 00:16:29.466 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:29.466 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:29.725 256+0 records in 00:16:29.725 256+0 records out 00:16:29.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0995668 s, 10.5 MB/s 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:16:29.725 256+0 records in 00:16:29.725 256+0 records out 00:16:29.725 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.092011 s, 11.4 MB/s 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:29.725 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:29.726 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:29.985 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:30.244 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:30.244 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:30.244 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:30.245 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.245 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.245 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:30.245 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:30.245 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.245 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.245 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:30.504 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:30.504 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:30.504 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:30.504 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.504 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.504 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:30.504 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:30.504 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.505 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.505 09:30:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:30.763 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:31.021 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:31.279 09:30:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:16:31.537 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:31.795 malloc_lvol_verify 00:16:31.795 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:32.053 9249c559-15a8-4a7a-98d0-a1abff0b1303 00:16:32.053 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:32.053 35628ebe-8bae-4f64-b541-5dead8200e9a 00:16:32.312 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:32.312 /dev/nbd0 00:16:32.312 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:16:32.312 Discarding device blocks: mke2fs 1.46.5 (30-Dec-2021) 00:16:32.312 0/4096 done 00:16:32.312 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:32.312 00:16:32.312 Allocating group tables: 0/1 done 00:16:32.312 Writing inode tables: 0/1 done 00:16:32.312 Creating journal (1024 blocks): done 00:16:32.312 Writing superblocks and filesystem accounting information: 0/1 done 00:16:32.312 00:16:32.312 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:16:32.312 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:32.312 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:32.312 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:32.312 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:32.312 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:32.312 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:32.312 09:30:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 67132 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 67132 ']' 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 67132 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67132 00:16:32.571 killing process with pid 67132 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:32.571 09:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67132' 00:16:32.572 09:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 67132 00:16:32.572 09:30:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 67132 00:16:33.950 ************************************ 00:16:33.950 END TEST bdev_nbd 00:16:33.950 ************************************ 00:16:33.950 09:30:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:33.950 00:16:33.950 real 0m11.398s 00:16:33.950 user 0m15.371s 00:16:33.950 sys 0m3.922s 00:16:33.950 09:30:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:33.950 09:30:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:33.950 09:30:34 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:16:33.950 09:30:34 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:16:33.950 09:30:34 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:16:33.950 skipping fio tests on NVMe due to multi-ns failures. 00:16:33.950 09:30:34 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:16:33.950 09:30:34 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:33.950 09:30:34 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:33.950 09:30:34 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:33.950 09:30:34 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:33.950 09:30:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:33.950 ************************************ 00:16:33.950 START TEST bdev_verify 00:16:33.950 ************************************ 00:16:33.950 09:30:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:34.210 [2024-07-25 09:30:34.640427] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:34.210 [2024-07-25 09:30:34.640527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67555 ] 00:16:34.210 [2024-07-25 09:30:34.803007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:34.469 [2024-07-25 09:30:35.017661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.469 [2024-07-25 09:30:35.017699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.407 Running I/O for 5 seconds... 
00:16:40.680 00:16:40.680 Latency(us) 00:16:40.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.680 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x0 length 0xbd0bd 00:16:40.680 Nvme0n1 : 5.05 1446.04 5.65 0.00 0.00 88174.55 20948.63 79673.46 00:16:40.680 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:40.680 Nvme0n1 : 5.06 1402.97 5.48 0.00 0.00 90741.03 12305.89 84252.39 00:16:40.680 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x0 length 0x4ff80 00:16:40.680 Nvme1n1p1 : 5.05 1445.57 5.65 0.00 0.00 88041.63 20376.26 74636.63 00:16:40.680 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x4ff80 length 0x4ff80 00:16:40.680 Nvme1n1p1 : 5.08 1410.94 5.51 0.00 0.00 90383.95 13107.20 82878.71 00:16:40.680 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x0 length 0x4ff7f 00:16:40.680 Nvme1n1p2 : 5.07 1452.06 5.67 0.00 0.00 87579.02 8814.45 73262.95 00:16:40.680 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:16:40.680 Nvme1n1p2 : 5.08 1410.51 5.51 0.00 0.00 90201.86 12076.94 84710.29 00:16:40.680 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x0 length 0x80000 00:16:40.680 Nvme2n1 : 5.07 1451.52 5.67 0.00 0.00 87477.18 9043.40 70515.59 00:16:40.680 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x80000 length 0x80000 00:16:40.680 Nvme2n1 : 5.08 1409.70 5.51 0.00 0.00 90084.02 13851.28 86541.86 00:16:40.680 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x0 length 0x80000 00:16:40.680 Nvme2n2 : 5.07 1451.07 5.67 0.00 0.00 87362.45 8699.98 72805.06 00:16:40.680 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x80000 length 0x80000 00:16:40.680 Nvme2n2 : 5.09 1408.94 5.50 0.00 0.00 89955.81 15682.85 86999.76 00:16:40.680 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x0 length 0x80000 00:16:40.680 Nvme2n3 : 5.08 1460.86 5.71 0.00 0.00 86784.77 6267.42 75094.53 00:16:40.680 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x80000 length 0x80000 00:16:40.680 Nvme2n3 : 5.09 1408.65 5.50 0.00 0.00 89806.46 15453.90 86541.86 00:16:40.680 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x0 length 0x20000 00:16:40.680 Nvme3n1 : 5.08 1460.03 5.70 0.00 0.00 86637.64 7726.95 77383.99 00:16:40.680 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:40.680 Verification LBA range: start 0x20000 length 0x20000 00:16:40.680 Nvme3n1 : 5.09 1408.35 5.50 0.00 0.00 89675.83 15110.48 86083.97 00:16:40.680 =================================================================================================================== 00:16:40.680 Total : 20027.20 78.23 0.00 0.00 88758.27 6267.42 86999.76 00:16:42.061 
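Note: the verification results above are produced by the bdevperf example application; run_test bdev_verify is only a wrapper around it. Outside the test harness, the same workload can be launched directly with the command shown in the trace, for example:

# Verify workload as driven by the bdev_verify test above:
# -q 128 (queue depth), -o 4096 (I/O size in bytes), -w verify, -t 5 (run time in seconds),
# -m 0x3 (core mask); the bdev list comes from the JSON config the test generated.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3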
00:16:42.061 real 0m7.901s 00:16:42.061 user 0m14.450s 00:16:42.061 sys 0m0.265s 00:16:42.061 09:30:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:42.061 09:30:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:42.061 ************************************ 00:16:42.061 END TEST bdev_verify 00:16:42.061 ************************************ 00:16:42.061 09:30:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:42.061 09:30:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:16:42.061 09:30:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:42.061 09:30:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:42.061 ************************************ 00:16:42.061 START TEST bdev_verify_big_io 00:16:42.061 ************************************ 00:16:42.061 09:30:42 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:42.061 [2024-07-25 09:30:42.605364] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:42.061 [2024-07-25 09:30:42.605480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67654 ] 00:16:42.321 [2024-07-25 09:30:42.767498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:42.580 [2024-07-25 09:30:42.985103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.580 [2024-07-25 09:30:42.985139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.518 Running I/O for 5 seconds... 
00:16:50.090 00:16:50.090 Latency(us) 00:16:50.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.090 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x0 length 0xbd0b 00:16:50.090 Nvme0n1 : 5.73 128.51 8.03 0.00 0.00 960212.05 29763.07 945092.08 00:16:50.090 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:50.090 Nvme0n1 : 5.76 116.60 7.29 0.00 0.00 1058294.65 21520.99 1465259.04 00:16:50.090 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x0 length 0x4ff8 00:16:50.090 Nvme1n1p1 : 5.77 133.47 8.34 0.00 0.00 914428.68 58152.47 952418.38 00:16:50.090 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x4ff8 length 0x4ff8 00:16:50.090 Nvme1n1p1 : 5.76 128.72 8.04 0.00 0.00 941707.25 86999.76 959744.67 00:16:50.090 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x0 length 0x4ff7 00:16:50.090 Nvme1n1p2 : 5.73 133.99 8.37 0.00 0.00 897836.94 76010.31 952418.38 00:16:50.090 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x4ff7 length 0x4ff7 00:16:50.090 Nvme1n1p2 : 5.77 128.51 8.03 0.00 0.00 927858.97 103026.03 981723.56 00:16:50.090 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x0 length 0x8000 00:16:50.090 Nvme2n1 : 5.77 133.45 8.34 0.00 0.00 873307.17 77383.99 959744.67 00:16:50.090 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x8000 length 0x8000 00:16:50.090 Nvme2n1 : 5.77 127.97 8.00 0.00 0.00 902241.82 96615.52 989049.85 00:16:50.090 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x0 length 0x8000 00:16:50.090 Nvme2n2 : 5.77 133.07 8.32 0.00 0.00 852424.10 59984.04 967070.97 00:16:50.090 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x8000 length 0x8000 00:16:50.090 Nvme2n2 : 5.83 136.44 8.53 0.00 0.00 835756.63 28274.92 926776.34 00:16:50.090 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x0 length 0x8000 00:16:50.090 Nvme2n3 : 5.80 143.08 8.94 0.00 0.00 779929.84 26099.93 974397.26 00:16:50.090 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x8000 length 0x8000 00:16:50.090 Nvme2n3 : 5.86 135.27 8.45 0.00 0.00 821566.79 28274.92 1831573.80 00:16:50.090 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x0 length 0x2000 00:16:50.090 Nvme3n1 : 5.84 157.52 9.84 0.00 0.00 692612.61 3505.75 974397.26 00:16:50.090 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:50.090 Verification LBA range: start 0x2000 length 0x2000 00:16:50.090 Nvme3n1 : 5.88 149.60 9.35 0.00 0.00 725885.23 8528.27 1860878.98 00:16:50.090 =================================================================================================================== 00:16:50.090 Total : 1886.19 117.89 0.00 0.00 863336.49 3505.75 
1860878.98 00:16:51.470 00:16:51.470 real 0m9.494s 00:16:51.470 user 0m17.583s 00:16:51.470 sys 0m0.311s 00:16:51.470 09:30:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:51.470 09:30:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.470 ************************************ 00:16:51.470 END TEST bdev_verify_big_io 00:16:51.470 ************************************ 00:16:51.470 09:30:52 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:51.470 09:30:52 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:51.470 09:30:52 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.470 09:30:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:51.470 ************************************ 00:16:51.470 START TEST bdev_write_zeroes 00:16:51.470 ************************************ 00:16:51.470 09:30:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:51.730 [2024-07-25 09:30:52.160138] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:51.730 [2024-07-25 09:30:52.160264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67781 ] 00:16:51.730 [2024-07-25 09:30:52.322197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.989 [2024-07-25 09:30:52.541090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.925 Running I/O for 1 seconds... 
00:16:53.859 00:16:53.859 Latency(us) 00:16:53.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.859 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:53.859 Nvme0n1 : 1.02 9065.29 35.41 0.00 0.00 14076.62 10245.37 32739.38 00:16:53.859 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:53.859 Nvme1n1p1 : 1.02 9053.86 35.37 0.00 0.00 14072.92 10474.31 32281.49 00:16:53.859 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:53.859 Nvme1n1p2 : 1.02 9043.19 35.32 0.00 0.00 14020.62 10474.31 31823.59 00:16:53.859 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:53.859 Nvme2n1 : 1.02 9075.74 35.45 0.00 0.00 13934.56 8585.50 26901.24 00:16:53.859 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:53.859 Nvme2n2 : 1.02 9065.18 35.41 0.00 0.00 13901.72 8986.16 24726.25 00:16:53.859 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:53.859 Nvme2n3 : 1.03 9108.72 35.58 0.00 0.00 13818.05 4550.32 23009.15 00:16:53.859 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:53.859 Nvme3n1 : 1.03 9100.68 35.55 0.00 0.00 13794.24 4636.17 21864.41 00:16:53.859 =================================================================================================================== 00:16:53.859 Total : 63512.68 248.10 0.00 0.00 13944.93 4550.32 32739.38 00:16:55.240 00:16:55.240 real 0m3.479s 00:16:55.240 user 0m3.125s 00:16:55.240 sys 0m0.241s 00:16:55.240 09:30:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:55.240 09:30:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:55.240 ************************************ 00:16:55.240 END TEST bdev_write_zeroes 00:16:55.240 ************************************ 00:16:55.240 09:30:55 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:55.240 09:30:55 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:55.240 09:30:55 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:55.240 09:30:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:55.240 ************************************ 00:16:55.240 START TEST bdev_json_nonenclosed 00:16:55.240 ************************************ 00:16:55.240 09:30:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:55.240 [2024-07-25 09:30:55.701200] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:55.241 [2024-07-25 09:30:55.701327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67834 ] 00:16:55.499 [2024-07-25 09:30:55.863291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.499 [2024-07-25 09:30:56.083748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.499 [2024-07-25 09:30:56.083851] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:55.499 [2024-07-25 09:30:56.083870] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:55.499 [2024-07-25 09:30:56.083882] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:56.069 00:16:56.069 real 0m0.895s 00:16:56.069 user 0m0.657s 00:16:56.069 sys 0m0.132s 00:16:56.069 09:30:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:56.069 09:30:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:56.069 ************************************ 00:16:56.069 END TEST bdev_json_nonenclosed 00:16:56.069 ************************************ 00:16:56.069 09:30:56 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:56.069 09:30:56 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:16:56.069 09:30:56 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:56.069 09:30:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:56.069 ************************************ 00:16:56.069 START TEST bdev_json_nonarray 00:16:56.069 ************************************ 00:16:56.069 09:30:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:56.069 [2024-07-25 09:30:56.655275] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:56.069 [2024-07-25 09:30:56.655385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67865 ] 00:16:56.328 [2024-07-25 09:30:56.817204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.588 [2024-07-25 09:30:57.035051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.588 [2024-07-25 09:30:57.035152] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:56.588 [2024-07-25 09:30:57.035171] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:56.588 [2024-07-25 09:30:57.035184] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:57.171 00:16:57.171 real 0m0.894s 00:16:57.171 user 0m0.665s 00:16:57.171 sys 0m0.123s 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:57.171 ************************************ 00:16:57.171 END TEST bdev_json_nonarray 00:16:57.171 ************************************ 00:16:57.171 09:30:57 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:16:57.171 09:30:57 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:16:57.171 09:30:57 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:16:57.171 09:30:57 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:57.171 09:30:57 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.171 09:30:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:16:57.171 ************************************ 00:16:57.171 START TEST bdev_gpt_uuid 00:16:57.171 ************************************ 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67896 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 67896 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 67896 ']' 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.171 09:30:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:57.171 [2024-07-25 09:30:57.627516] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:16:57.171 [2024-07-25 09:30:57.627634] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67896 ] 00:16:57.430 [2024-07-25 09:30:57.788888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.430 [2024-07-25 09:30:58.009257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.369 09:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.369 09:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:16:58.369 09:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:58.369 09:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.369 09:30:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:58.629 Some configs were skipped because the RPC state that can call them passed over. 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:16:58.629 { 00:16:58.629 "name": "Nvme1n1p1", 00:16:58.629 "aliases": [ 00:16:58.629 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:16:58.629 ], 00:16:58.629 "product_name": "GPT Disk", 00:16:58.629 "block_size": 4096, 00:16:58.629 "num_blocks": 655104, 00:16:58.629 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:16:58.629 "assigned_rate_limits": { 00:16:58.629 "rw_ios_per_sec": 0, 00:16:58.629 "rw_mbytes_per_sec": 0, 00:16:58.629 "r_mbytes_per_sec": 0, 00:16:58.629 "w_mbytes_per_sec": 0 00:16:58.629 }, 00:16:58.629 "claimed": false, 00:16:58.629 "zoned": false, 00:16:58.629 "supported_io_types": { 00:16:58.629 "read": true, 00:16:58.629 "write": true, 00:16:58.629 "unmap": true, 00:16:58.629 "flush": true, 00:16:58.629 "reset": true, 00:16:58.629 "nvme_admin": false, 00:16:58.629 "nvme_io": false, 00:16:58.629 "nvme_io_md": false, 00:16:58.629 "write_zeroes": true, 00:16:58.629 "zcopy": false, 00:16:58.629 "get_zone_info": false, 00:16:58.629 "zone_management": false, 00:16:58.629 "zone_append": false, 00:16:58.629 "compare": true, 00:16:58.629 "compare_and_write": false, 00:16:58.629 "abort": true, 00:16:58.629 "seek_hole": false, 00:16:58.629 "seek_data": false, 00:16:58.629 "copy": true, 00:16:58.629 "nvme_iov_md": false 00:16:58.629 }, 00:16:58.629 "driver_specific": { 
00:16:58.629 "gpt": { 00:16:58.629 "base_bdev": "Nvme1n1", 00:16:58.629 "offset_blocks": 256, 00:16:58.629 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:16:58.629 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:16:58.629 "partition_name": "SPDK_TEST_first" 00:16:58.629 } 00:16:58.629 } 00:16:58.629 } 00:16:58.629 ]' 00:16:58.629 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:16:58.889 { 00:16:58.889 "name": "Nvme1n1p2", 00:16:58.889 "aliases": [ 00:16:58.889 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:16:58.889 ], 00:16:58.889 "product_name": "GPT Disk", 00:16:58.889 "block_size": 4096, 00:16:58.889 "num_blocks": 655103, 00:16:58.889 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:16:58.889 "assigned_rate_limits": { 00:16:58.889 "rw_ios_per_sec": 0, 00:16:58.889 "rw_mbytes_per_sec": 0, 00:16:58.889 "r_mbytes_per_sec": 0, 00:16:58.889 "w_mbytes_per_sec": 0 00:16:58.889 }, 00:16:58.889 "claimed": false, 00:16:58.889 "zoned": false, 00:16:58.889 "supported_io_types": { 00:16:58.889 "read": true, 00:16:58.889 "write": true, 00:16:58.889 "unmap": true, 00:16:58.889 "flush": true, 00:16:58.889 "reset": true, 00:16:58.889 "nvme_admin": false, 00:16:58.889 "nvme_io": false, 00:16:58.889 "nvme_io_md": false, 00:16:58.889 "write_zeroes": true, 00:16:58.889 "zcopy": false, 00:16:58.889 "get_zone_info": false, 00:16:58.889 "zone_management": false, 00:16:58.889 "zone_append": false, 00:16:58.889 "compare": true, 00:16:58.889 "compare_and_write": false, 00:16:58.889 "abort": true, 00:16:58.889 "seek_hole": false, 00:16:58.889 "seek_data": false, 00:16:58.889 "copy": true, 00:16:58.889 "nvme_iov_md": false 00:16:58.889 }, 00:16:58.889 "driver_specific": { 00:16:58.889 "gpt": { 00:16:58.889 "base_bdev": "Nvme1n1", 00:16:58.889 "offset_blocks": 655360, 00:16:58.889 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:16:58.889 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:16:58.889 "partition_name": "SPDK_TEST_second" 00:16:58.889 } 00:16:58.889 } 00:16:58.889 } 00:16:58.889 ]' 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:16:58.889 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 67896 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 67896 ']' 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 67896 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67896 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:59.149 killing process with pid 67896 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67896' 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 67896 00:16:59.149 09:30:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 67896 00:17:01.689 00:17:01.689 real 0m4.364s 00:17:01.689 user 0m4.493s 00:17:01.689 sys 0m0.462s 00:17:01.689 09:31:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:01.689 09:31:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:17:01.689 ************************************ 00:17:01.689 END TEST bdev_gpt_uuid 00:17:01.689 ************************************ 00:17:01.689 09:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:17:01.689 09:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:17:01.689 09:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:17:01.689 09:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:17:01.689 09:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:01.689 09:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:17:01.689 09:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:17:01.689 09:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:17:01.689 09:31:01 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:01.948 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:02.207 Waiting for block devices as requested 00:17:02.207 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.467 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:17:02.467 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:17:02.467 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:17:07.742 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:17:07.742 09:31:08 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:17:07.742 09:31:08 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:17:07.742 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:17:07.742 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:17:07.742 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:17:07.742 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:17:07.742 09:31:08 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:17:07.742 00:17:07.742 real 1m5.172s 00:17:07.742 user 1m21.705s 00:17:07.742 sys 0m9.908s 00:17:07.742 09:31:08 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:07.742 09:31:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:17:07.742 ************************************ 00:17:07.742 END TEST blockdev_nvme_gpt 00:17:07.742 ************************************ 00:17:08.002 09:31:08 -- spdk/autotest.sh@220 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:17:08.002 09:31:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:08.002 09:31:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:08.002 09:31:08 -- common/autotest_common.sh@10 -- # set +x 00:17:08.002 ************************************ 00:17:08.002 START TEST nvme 00:17:08.002 ************************************ 00:17:08.002 09:31:08 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:17:08.002 * Looking for test storage... 00:17:08.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:17:08.002 09:31:08 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:08.570 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:09.508 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:17:09.508 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:09.508 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:09.508 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:17:09.508 09:31:10 nvme -- nvme/nvme.sh@79 -- # uname 00:17:09.508 09:31:10 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:17:09.508 09:31:10 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:17:09.508 09:31:10 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:17:09.508 09:31:10 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:17:09.508 09:31:10 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:17:09.508 09:31:10 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:17:09.508 09:31:10 nvme -- common/autotest_common.sh@1071 -- # stubpid=68546 00:17:09.508 09:31:10 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:17:09.508 Waiting for stub to ready for secondary processes... 
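For reference, the eight bytes wipefs reports erasing above (45 46 49 20 50 41 52 54) are the ASCII string "EFI PART", the GPT header signature, and the two bytes 55 aa at offset 0x1fe are the protective-MBR boot signature; clearing both leaves the namespace unpartitioned for the NVMe tests that follow. A minimal shell check (plain printf escapes, nothing SPDK-specific) decodes the signature:

    $ printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'
    EFI PART

The stub helper launched just above uses -m 0xE, i.e. a core mask with bits 1-3 set, which matches the three reactor cores (1, 2 and 3) reported below once the stub finishes starting.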
00:17:09.508 09:31:10 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:17:09.508 09:31:10 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68546 ]] 00:17:09.508 09:31:10 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:17:09.508 09:31:10 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:17:09.508 [2024-07-25 09:31:10.098366] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:17:09.508 [2024-07-25 09:31:10.098482] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:17:10.449 09:31:11 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:17:10.449 09:31:11 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/68546 ]] 00:17:10.449 09:31:11 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:17:10.708 [2024-07-25 09:31:11.070058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:10.708 [2024-07-25 09:31:11.275041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.708 [2024-07-25 09:31:11.275179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.708 [2024-07-25 09:31:11.275223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:10.708 [2024-07-25 09:31:11.291204] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:17:10.708 [2024-07-25 09:31:11.291261] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:17:10.708 [2024-07-25 09:31:11.306223] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:17:10.708 [2024-07-25 09:31:11.307095] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:17:10.708 [2024-07-25 09:31:11.316181] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:17:10.708 [2024-07-25 09:31:11.316665] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:17:10.708 [2024-07-25 09:31:11.316815] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:17:10.967 [2024-07-25 09:31:11.322038] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:17:10.967 [2024-07-25 09:31:11.322362] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:17:10.967 [2024-07-25 09:31:11.322528] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:17:10.967 [2024-07-25 09:31:11.327459] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:17:10.967 [2024-07-25 09:31:11.327666] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:17:10.967 [2024-07-25 09:31:11.327760] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:17:10.967 [2024-07-25 09:31:11.327844] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:17:10.967 [2024-07-25 09:31:11.327940] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:17:11.536 09:31:12 nvme -- common/autotest_common.sh@1073 -- 
# '[' -e /var/run/spdk_stub0 ']' 00:17:11.536 done. 00:17:11.536 09:31:12 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:17:11.536 09:31:12 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:17:11.536 09:31:12 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:17:11.536 09:31:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.536 09:31:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.536 ************************************ 00:17:11.536 START TEST nvme_reset 00:17:11.536 ************************************ 00:17:11.536 09:31:12 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:17:11.796 Initializing NVMe Controllers 00:17:11.796 Skipping QEMU NVMe SSD at 0000:00:10.0 00:17:11.796 Skipping QEMU NVMe SSD at 0000:00:11.0 00:17:11.796 Skipping QEMU NVMe SSD at 0000:00:13.0 00:17:11.796 Skipping QEMU NVMe SSD at 0000:00:12.0 00:17:11.796 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:17:11.796 00:17:11.796 real 0m0.225s 00:17:11.796 user 0m0.078s 00:17:11.796 sys 0m0.108s 00:17:11.796 09:31:12 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.796 09:31:12 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:17:11.796 ************************************ 00:17:11.796 END TEST nvme_reset 00:17:11.796 ************************************ 00:17:11.796 09:31:12 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:17:11.796 09:31:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:11.796 09:31:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.796 09:31:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:11.796 ************************************ 00:17:11.796 START TEST nvme_identify 00:17:11.796 ************************************ 00:17:11.796 09:31:12 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:17:11.796 09:31:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:17:11.796 09:31:12 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:17:11.796 09:31:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:17:11.796 09:31:12 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:17:11.796 09:31:12 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:17:11.796 09:31:12 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:17:11.796 09:31:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:11.796 09:31:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:11.796 09:31:12 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:17:12.058 09:31:12 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:17:12.058 09:31:12 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:12.058 09:31:12 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:17:12.058 [2024-07-25 09:31:12.641809] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 68579 terminated unexpected 00:17:12.058 
===================================================== 00:17:12.058 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:12.058 ===================================================== 00:17:12.058 Controller Capabilities/Features 00:17:12.058 ================================ 00:17:12.058 Vendor ID: 1b36 00:17:12.058 Subsystem Vendor ID: 1af4 00:17:12.058 Serial Number: 12340 00:17:12.058 Model Number: QEMU NVMe Ctrl 00:17:12.058 Firmware Version: 8.0.0 00:17:12.058 Recommended Arb Burst: 6 00:17:12.058 IEEE OUI Identifier: 00 54 52 00:17:12.058 Multi-path I/O 00:17:12.058 May have multiple subsystem ports: No 00:17:12.058 May have multiple controllers: No 00:17:12.058 Associated with SR-IOV VF: No 00:17:12.058 Max Data Transfer Size: 524288 00:17:12.058 Max Number of Namespaces: 256 00:17:12.058 Max Number of I/O Queues: 64 00:17:12.058 NVMe Specification Version (VS): 1.4 00:17:12.058 NVMe Specification Version (Identify): 1.4 00:17:12.058 Maximum Queue Entries: 2048 00:17:12.058 Contiguous Queues Required: Yes 00:17:12.058 Arbitration Mechanisms Supported 00:17:12.058 Weighted Round Robin: Not Supported 00:17:12.058 Vendor Specific: Not Supported 00:17:12.058 Reset Timeout: 7500 ms 00:17:12.058 Doorbell Stride: 4 bytes 00:17:12.058 NVM Subsystem Reset: Not Supported 00:17:12.058 Command Sets Supported 00:17:12.058 NVM Command Set: Supported 00:17:12.058 Boot Partition: Not Supported 00:17:12.058 Memory Page Size Minimum: 4096 bytes 00:17:12.058 Memory Page Size Maximum: 65536 bytes 00:17:12.058 Persistent Memory Region: Not Supported 00:17:12.058 Optional Asynchronous Events Supported 00:17:12.058 Namespace Attribute Notices: Supported 00:17:12.058 Firmware Activation Notices: Not Supported 00:17:12.058 ANA Change Notices: Not Supported 00:17:12.058 PLE Aggregate Log Change Notices: Not Supported 00:17:12.058 LBA Status Info Alert Notices: Not Supported 00:17:12.058 EGE Aggregate Log Change Notices: Not Supported 00:17:12.058 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.058 Zone Descriptor Change Notices: Not Supported 00:17:12.058 Discovery Log Change Notices: Not Supported 00:17:12.058 Controller Attributes 00:17:12.058 128-bit Host Identifier: Not Supported 00:17:12.058 Non-Operational Permissive Mode: Not Supported 00:17:12.058 NVM Sets: Not Supported 00:17:12.058 Read Recovery Levels: Not Supported 00:17:12.058 Endurance Groups: Not Supported 00:17:12.058 Predictable Latency Mode: Not Supported 00:17:12.058 Traffic Based Keep ALive: Not Supported 00:17:12.058 Namespace Granularity: Not Supported 00:17:12.058 SQ Associations: Not Supported 00:17:12.058 UUID List: Not Supported 00:17:12.058 Multi-Domain Subsystem: Not Supported 00:17:12.058 Fixed Capacity Management: Not Supported 00:17:12.058 Variable Capacity Management: Not Supported 00:17:12.058 Delete Endurance Group: Not Supported 00:17:12.058 Delete NVM Set: Not Supported 00:17:12.058 Extended LBA Formats Supported: Supported 00:17:12.058 Flexible Data Placement Supported: Not Supported 00:17:12.058 00:17:12.058 Controller Memory Buffer Support 00:17:12.058 ================================ 00:17:12.058 Supported: No 00:17:12.058 00:17:12.058 Persistent Memory Region Support 00:17:12.058 ================================ 00:17:12.058 Supported: No 00:17:12.058 00:17:12.058 Admin Command Set Attributes 00:17:12.058 ============================ 00:17:12.058 Security Send/Receive: Not Supported 00:17:12.058 Format NVM: Supported 00:17:12.058 Firmware Activate/Download: Not Supported 00:17:12.058 Namespace Management: 
Supported 00:17:12.058 Device Self-Test: Not Supported 00:17:12.058 Directives: Supported 00:17:12.058 NVMe-MI: Not Supported 00:17:12.058 Virtualization Management: Not Supported 00:17:12.058 Doorbell Buffer Config: Supported 00:17:12.058 Get LBA Status Capability: Not Supported 00:17:12.058 Command & Feature Lockdown Capability: Not Supported 00:17:12.058 Abort Command Limit: 4 00:17:12.058 Async Event Request Limit: 4 00:17:12.058 Number of Firmware Slots: N/A 00:17:12.058 Firmware Slot 1 Read-Only: N/A 00:17:12.058 Firmware Activation Without Reset: N/A 00:17:12.058 Multiple Update Detection Support: N/A 00:17:12.058 Firmware Update Granularity: No Information Provided 00:17:12.058 Per-Namespace SMART Log: Yes 00:17:12.058 Asymmetric Namespace Access Log Page: Not Supported 00:17:12.058 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:17:12.058 Command Effects Log Page: Supported 00:17:12.058 Get Log Page Extended Data: Supported 00:17:12.058 Telemetry Log Pages: Not Supported 00:17:12.058 Persistent Event Log Pages: Not Supported 00:17:12.058 Supported Log Pages Log Page: May Support 00:17:12.058 Commands Supported & Effects Log Page: Not Supported 00:17:12.058 Feature Identifiers & Effects Log Page:May Support 00:17:12.058 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.058 Data Area 4 for Telemetry Log: Not Supported 00:17:12.058 Error Log Page Entries Supported: 1 00:17:12.058 Keep Alive: Not Supported 00:17:12.058 00:17:12.058 NVM Command Set Attributes 00:17:12.058 ========================== 00:17:12.058 Submission Queue Entry Size 00:17:12.058 Max: 64 00:17:12.058 Min: 64 00:17:12.058 Completion Queue Entry Size 00:17:12.058 Max: 16 00:17:12.058 Min: 16 00:17:12.058 Number of Namespaces: 256 00:17:12.058 Compare Command: Supported 00:17:12.058 Write Uncorrectable Command: Not Supported 00:17:12.058 Dataset Management Command: Supported 00:17:12.058 Write Zeroes Command: Supported 00:17:12.058 Set Features Save Field: Supported 00:17:12.058 Reservations: Not Supported 00:17:12.058 Timestamp: Supported 00:17:12.058 Copy: Supported 00:17:12.058 Volatile Write Cache: Present 00:17:12.058 Atomic Write Unit (Normal): 1 00:17:12.058 Atomic Write Unit (PFail): 1 00:17:12.058 Atomic Compare & Write Unit: 1 00:17:12.058 Fused Compare & Write: Not Supported 00:17:12.058 Scatter-Gather List 00:17:12.058 SGL Command Set: Supported 00:17:12.058 SGL Keyed: Not Supported 00:17:12.058 SGL Bit Bucket Descriptor: Not Supported 00:17:12.058 SGL Metadata Pointer: Not Supported 00:17:12.059 Oversized SGL: Not Supported 00:17:12.059 SGL Metadata Address: Not Supported 00:17:12.059 SGL Offset: Not Supported 00:17:12.059 Transport SGL Data Block: Not Supported 00:17:12.059 Replay Protected Memory Block: Not Supported 00:17:12.059 00:17:12.059 Firmware Slot Information 00:17:12.059 ========================= 00:17:12.059 Active slot: 1 00:17:12.059 Slot 1 Firmware Revision: 1.0 00:17:12.059 00:17:12.059 00:17:12.059 Commands Supported and Effects 00:17:12.059 ============================== 00:17:12.059 Admin Commands 00:17:12.059 -------------- 00:17:12.059 Delete I/O Submission Queue (00h): Supported 00:17:12.059 Create I/O Submission Queue (01h): Supported 00:17:12.059 Get Log Page (02h): Supported 00:17:12.059 Delete I/O Completion Queue (04h): Supported 00:17:12.059 Create I/O Completion Queue (05h): Supported 00:17:12.059 Identify (06h): Supported 00:17:12.059 Abort (08h): Supported 00:17:12.059 Set Features (09h): Supported 00:17:12.059 Get Features (0Ah): Supported 00:17:12.059 Asynchronous 
Event Request (0Ch): Supported 00:17:12.059 Namespace Attachment (15h): Supported NS-Inventory-Change 00:17:12.059 Directive Send (19h): Supported 00:17:12.059 Directive Receive (1Ah): Supported 00:17:12.059 Virtualization Management (1Ch): Supported 00:17:12.059 Doorbell Buffer Config (7Ch): Supported 00:17:12.059 Format NVM (80h): Supported LBA-Change 00:17:12.059 I/O Commands 00:17:12.059 ------------ 00:17:12.059 Flush (00h): Supported LBA-Change 00:17:12.059 Write (01h): Supported LBA-Change 00:17:12.059 Read (02h): Supported 00:17:12.059 Compare (05h): Supported 00:17:12.059 Write Zeroes (08h): Supported LBA-Change 00:17:12.059 Dataset Management (09h): Supported LBA-Change 00:17:12.059 Unknown (0Ch): Supported 00:17:12.059 Unknown (12h): Supported 00:17:12.059 Copy (19h): Supported LBA-Change 00:17:12.059 Unknown (1Dh): Supported LBA-Change 00:17:12.059 00:17:12.059 Error Log 00:17:12.059 ========= 00:17:12.059 00:17:12.059 Arbitration 00:17:12.059 =========== 00:17:12.059 Arbitration Burst: no limit 00:17:12.059 00:17:12.059 Power Management 00:17:12.059 ================ 00:17:12.059 Number of Power States: 1 00:17:12.059 Current Power State: Power State #0 00:17:12.059 Power State #0: 00:17:12.059 Max Power: 25.00 W 00:17:12.059 Non-Operational State: Operational 00:17:12.059 Entry Latency: 16 microseconds 00:17:12.059 Exit Latency: 4 microseconds 00:17:12.059 Relative Read Throughput: 0 00:17:12.059 Relative Read Latency: 0 00:17:12.059 Relative Write Throughput: 0 00:17:12.059 Relative Write Latency: 0 00:17:12.059 Idle Power[2024-07-25 09:31:12.642873] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 68579 terminated unexpected 00:17:12.059 : Not Reported 00:17:12.059 Active Power: Not Reported 00:17:12.059 Non-Operational Permissive Mode: Not Supported 00:17:12.059 00:17:12.059 Health Information 00:17:12.059 ================== 00:17:12.059 Critical Warnings: 00:17:12.059 Available Spare Space: OK 00:17:12.059 Temperature: OK 00:17:12.059 Device Reliability: OK 00:17:12.059 Read Only: No 00:17:12.059 Volatile Memory Backup: OK 00:17:12.059 Current Temperature: 323 Kelvin (50 Celsius) 00:17:12.059 Temperature Threshold: 343 Kelvin (70 Celsius) 00:17:12.059 Available Spare: 0% 00:17:12.059 Available Spare Threshold: 0% 00:17:12.059 Life Percentage Used: 0% 00:17:12.059 Data Units Read: 776 00:17:12.059 Data Units Written: 668 00:17:12.059 Host Read Commands: 36246 00:17:12.059 Host Write Commands: 35284 00:17:12.059 Controller Busy Time: 0 minutes 00:17:12.059 Power Cycles: 0 00:17:12.059 Power On Hours: 0 hours 00:17:12.059 Unsafe Shutdowns: 0 00:17:12.059 Unrecoverable Media Errors: 0 00:17:12.059 Lifetime Error Log Entries: 0 00:17:12.059 Warning Temperature Time: 0 minutes 00:17:12.059 Critical Temperature Time: 0 minutes 00:17:12.059 00:17:12.059 Number of Queues 00:17:12.059 ================ 00:17:12.059 Number of I/O Submission Queues: 64 00:17:12.059 Number of I/O Completion Queues: 64 00:17:12.059 00:17:12.059 ZNS Specific Controller Data 00:17:12.059 ============================ 00:17:12.059 Zone Append Size Limit: 0 00:17:12.059 00:17:12.059 00:17:12.059 Active Namespaces 00:17:12.059 ================= 00:17:12.059 Namespace ID:1 00:17:12.059 Error Recovery Timeout: Unlimited 00:17:12.059 Command Set Identifier: NVM (00h) 00:17:12.059 Deallocate: Supported 00:17:12.059 Deallocated/Unwritten Error: Supported 00:17:12.059 Deallocated Read Value: All 0x00 00:17:12.059 Deallocate in Write Zeroes: Not Supported 00:17:12.059 Deallocated 
Guard Field: 0xFFFF 00:17:12.059 Flush: Supported 00:17:12.059 Reservation: Not Supported 00:17:12.059 Metadata Transferred as: Separate Metadata Buffer 00:17:12.059 Namespace Sharing Capabilities: Private 00:17:12.059 Size (in LBAs): 1548666 (5GiB) 00:17:12.059 Capacity (in LBAs): 1548666 (5GiB) 00:17:12.059 Utilization (in LBAs): 1548666 (5GiB) 00:17:12.059 Thin Provisioning: Not Supported 00:17:12.059 Per-NS Atomic Units: No 00:17:12.059 Maximum Single Source Range Length: 128 00:17:12.059 Maximum Copy Length: 128 00:17:12.059 Maximum Source Range Count: 128 00:17:12.059 NGUID/EUI64 Never Reused: No 00:17:12.059 Namespace Write Protected: No 00:17:12.059 Number of LBA Formats: 8 00:17:12.059 Current LBA Format: LBA Format #07 00:17:12.059 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.059 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.059 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.059 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.059 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.059 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.059 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.059 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:12.059 00:17:12.059 NVM Specific Namespace Data 00:17:12.059 =========================== 00:17:12.059 Logical Block Storage Tag Mask: 0 00:17:12.059 Protection Information Capabilities: 00:17:12.059 16b Guard Protection Information Storage Tag Support: No 00:17:12.059 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.059 Storage Tag Check Read Support: No 00:17:12.059 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.059 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.059 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.059 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.059 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.059 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.059 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.059 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.059 ===================================================== 00:17:12.059 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:17:12.059 ===================================================== 00:17:12.059 Controller Capabilities/Features 00:17:12.059 ================================ 00:17:12.059 Vendor ID: 1b36 00:17:12.059 Subsystem Vendor ID: 1af4 00:17:12.059 Serial Number: 12341 00:17:12.059 Model Number: QEMU NVMe Ctrl 00:17:12.059 Firmware Version: 8.0.0 00:17:12.059 Recommended Arb Burst: 6 00:17:12.059 IEEE OUI Identifier: 00 54 52 00:17:12.059 Multi-path I/O 00:17:12.059 May have multiple subsystem ports: No 00:17:12.059 May have multiple controllers: No 00:17:12.059 Associated with SR-IOV VF: No 00:17:12.059 Max Data Transfer Size: 524288 00:17:12.059 Max Number of Namespaces: 256 00:17:12.059 Max Number of I/O Queues: 64 00:17:12.059 NVMe Specification Version (VS): 1.4 00:17:12.059 NVMe Specification Version (Identify): 1.4 00:17:12.059 Maximum Queue Entries: 2048 00:17:12.059 Contiguous Queues Required: Yes 00:17:12.059 Arbitration Mechanisms Supported 
00:17:12.059 Weighted Round Robin: Not Supported 00:17:12.059 Vendor Specific: Not Supported 00:17:12.059 Reset Timeout: 7500 ms 00:17:12.059 Doorbell Stride: 4 bytes 00:17:12.059 NVM Subsystem Reset: Not Supported 00:17:12.059 Command Sets Supported 00:17:12.059 NVM Command Set: Supported 00:17:12.059 Boot Partition: Not Supported 00:17:12.059 Memory Page Size Minimum: 4096 bytes 00:17:12.059 Memory Page Size Maximum: 65536 bytes 00:17:12.059 Persistent Memory Region: Not Supported 00:17:12.060 Optional Asynchronous Events Supported 00:17:12.060 Namespace Attribute Notices: Supported 00:17:12.060 Firmware Activation Notices: Not Supported 00:17:12.060 ANA Change Notices: Not Supported 00:17:12.060 PLE Aggregate Log Change Notices: Not Supported 00:17:12.060 LBA Status Info Alert Notices: Not Supported 00:17:12.060 EGE Aggregate Log Change Notices: Not Supported 00:17:12.060 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.060 Zone Descriptor Change Notices: Not Supported 00:17:12.060 Discovery Log Change Notices: Not Supported 00:17:12.060 Controller Attributes 00:17:12.060 128-bit Host Identifier: Not Supported 00:17:12.060 Non-Operational Permissive Mode: Not Supported 00:17:12.060 NVM Sets: Not Supported 00:17:12.060 Read Recovery Levels: Not Supported 00:17:12.060 Endurance Groups: Not Supported 00:17:12.060 Predictable Latency Mode: Not Supported 00:17:12.060 Traffic Based Keep ALive: Not Supported 00:17:12.060 Namespace Granularity: Not Supported 00:17:12.060 SQ Associations: Not Supported 00:17:12.060 UUID List: Not Supported 00:17:12.060 Multi-Domain Subsystem: Not Supported 00:17:12.060 Fixed Capacity Management: Not Supported 00:17:12.060 Variable Capacity Management: Not Supported 00:17:12.060 Delete Endurance Group: Not Supported 00:17:12.060 Delete NVM Set: Not Supported 00:17:12.060 Extended LBA Formats Supported: Supported 00:17:12.060 Flexible Data Placement Supported: Not Supported 00:17:12.060 00:17:12.060 Controller Memory Buffer Support 00:17:12.060 ================================ 00:17:12.060 Supported: No 00:17:12.060 00:17:12.060 Persistent Memory Region Support 00:17:12.060 ================================ 00:17:12.060 Supported: No 00:17:12.060 00:17:12.060 Admin Command Set Attributes 00:17:12.060 ============================ 00:17:12.060 Security Send/Receive: Not Supported 00:17:12.060 Format NVM: Supported 00:17:12.060 Firmware Activate/Download: Not Supported 00:17:12.060 Namespace Management: Supported 00:17:12.060 Device Self-Test: Not Supported 00:17:12.060 Directives: Supported 00:17:12.060 NVMe-MI: Not Supported 00:17:12.060 Virtualization Management: Not Supported 00:17:12.060 Doorbell Buffer Config: Supported 00:17:12.060 Get LBA Status Capability: Not Supported 00:17:12.060 Command & Feature Lockdown Capability: Not Supported 00:17:12.060 Abort Command Limit: 4 00:17:12.060 Async Event Request Limit: 4 00:17:12.060 Number of Firmware Slots: N/A 00:17:12.060 Firmware Slot 1 Read-Only: N/A 00:17:12.060 Firmware Activation Without Reset: N/A 00:17:12.060 Multiple Update Detection Support: N/A 00:17:12.060 Firmware Update Granularity: No Information Provided 00:17:12.060 Per-Namespace SMART Log: Yes 00:17:12.060 Asymmetric Namespace Access Log Page: Not Supported 00:17:12.060 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:17:12.060 Command Effects Log Page: Supported 00:17:12.060 Get Log Page Extended Data: Supported 00:17:12.060 Telemetry Log Pages: Not Supported 00:17:12.060 Persistent Event Log Pages: Not Supported 00:17:12.060 Supported Log 
Pages Log Page: May Support 00:17:12.060 Commands Supported & Effects Log Page: Not Supported 00:17:12.060 Feature Identifiers & Effects Log Page:May Support 00:17:12.060 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.060 Data Area 4 for Telemetry Log: Not Supported 00:17:12.060 Error Log Page Entries Supported: 1 00:17:12.060 Keep Alive: Not Supported 00:17:12.060 00:17:12.060 NVM Command Set Attributes 00:17:12.060 ========================== 00:17:12.060 Submission Queue Entry Size 00:17:12.060 Max: 64 00:17:12.060 Min: 64 00:17:12.060 Completion Queue Entry Size 00:17:12.060 Max: 16 00:17:12.060 Min: 16 00:17:12.060 Number of Namespaces: 256 00:17:12.060 Compare Command: Supported 00:17:12.060 Write Uncorrectable Command: Not Supported 00:17:12.060 Dataset Management Command: Supported 00:17:12.060 Write Zeroes Command: Supported 00:17:12.060 Set Features Save Field: Supported 00:17:12.060 Reservations: Not Supported 00:17:12.060 Timestamp: Supported 00:17:12.060 Copy: Supported 00:17:12.060 Volatile Write Cache: Present 00:17:12.060 Atomic Write Unit (Normal): 1 00:17:12.060 Atomic Write Unit (PFail): 1 00:17:12.060 Atomic Compare & Write Unit: 1 00:17:12.060 Fused Compare & Write: Not Supported 00:17:12.060 Scatter-Gather List 00:17:12.060 SGL Command Set: Supported 00:17:12.060 SGL Keyed: Not Supported 00:17:12.060 SGL Bit Bucket Descriptor: Not Supported 00:17:12.060 SGL Metadata Pointer: Not Supported 00:17:12.060 Oversized SGL: Not Supported 00:17:12.060 SGL Metadata Address: Not Supported 00:17:12.060 SGL Offset: Not Supported 00:17:12.060 Transport SGL Data Block: Not Supported 00:17:12.060 Replay Protected Memory Block: Not Supported 00:17:12.060 00:17:12.060 Firmware Slot Information 00:17:12.060 ========================= 00:17:12.060 Active slot: 1 00:17:12.060 Slot 1 Firmware Revision: 1.0 00:17:12.060 00:17:12.060 00:17:12.060 Commands Supported and Effects 00:17:12.060 ============================== 00:17:12.060 Admin Commands 00:17:12.060 -------------- 00:17:12.060 Delete I/O Submission Queue (00h): Supported 00:17:12.060 Create I/O Submission Queue (01h): Supported 00:17:12.060 Get Log Page (02h): Supported 00:17:12.060 Delete I/O Completion Queue (04h): Supported 00:17:12.060 Create I/O Completion Queue (05h): Supported 00:17:12.060 Identify (06h): Supported 00:17:12.060 Abort (08h): Supported 00:17:12.060 Set Features (09h): Supported 00:17:12.060 Get Features (0Ah): Supported 00:17:12.060 Asynchronous Event Request (0Ch): Supported 00:17:12.060 Namespace Attachment (15h): Supported NS-Inventory-Change 00:17:12.060 Directive Send (19h): Supported 00:17:12.060 Directive Receive (1Ah): Supported 00:17:12.060 Virtualization Management (1Ch): Supported 00:17:12.060 Doorbell Buffer Config (7Ch): Supported 00:17:12.060 Format NVM (80h): Supported LBA-Change 00:17:12.060 I/O Commands 00:17:12.060 ------------ 00:17:12.060 Flush (00h): Supported LBA-Change 00:17:12.060 Write (01h): Supported LBA-Change 00:17:12.060 Read (02h): Supported 00:17:12.060 Compare (05h): Supported 00:17:12.060 Write Zeroes (08h): Supported LBA-Change 00:17:12.060 Dataset Management (09h): Supported LBA-Change 00:17:12.060 Unknown (0Ch): Supported 00:17:12.060 Unknown (12h): Supported 00:17:12.060 Copy (19h): Supported LBA-Change 00:17:12.060 Unknown (1Dh): Supported LBA-Change 00:17:12.060 00:17:12.060 Error Log 00:17:12.060 ========= 00:17:12.060 00:17:12.060 Arbitration 00:17:12.060 =========== 00:17:12.060 Arbitration Burst: no limit 00:17:12.060 00:17:12.060 Power Management 
00:17:12.060 ================ 00:17:12.060 Number of Power States: 1 00:17:12.060 Current Power State: Power State #0 00:17:12.060 Power State #0: 00:17:12.060 Max Power: 25.00 W 00:17:12.060 Non-Operational State: Operational 00:17:12.060 Entry Latency: 16 microseconds 00:17:12.060 Exit Latency: 4 microseconds 00:17:12.060 Relative Read Throughput: 0 00:17:12.060 Relative Read Latency: 0 00:17:12.060 Relative Write Throughput: 0 00:17:12.060 Relative Write Latency: 0 00:17:12.060 Idle Power: Not Reported 00:17:12.060 Active Power: Not Reported 00:17:12.060 Non-Operational Permissive Mode: Not Supported 00:17:12.060 00:17:12.060 Health Information 00:17:12.060 ================== 00:17:12.060 Critical Warnings: 00:17:12.060 Available Spare Space: OK 00:17:12.060 Temperature: [2024-07-25 09:31:12.643459] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 68579 terminated unexpected 00:17:12.060 OK 00:17:12.060 Device Reliability: OK 00:17:12.060 Read Only: No 00:17:12.060 Volatile Memory Backup: OK 00:17:12.060 Current Temperature: 323 Kelvin (50 Celsius) 00:17:12.060 Temperature Threshold: 343 Kelvin (70 Celsius) 00:17:12.060 Available Spare: 0% 00:17:12.060 Available Spare Threshold: 0% 00:17:12.060 Life Percentage Used: 0% 00:17:12.060 Data Units Read: 1209 00:17:12.060 Data Units Written: 998 00:17:12.060 Host Read Commands: 54526 00:17:12.060 Host Write Commands: 51661 00:17:12.060 Controller Busy Time: 0 minutes 00:17:12.060 Power Cycles: 0 00:17:12.060 Power On Hours: 0 hours 00:17:12.060 Unsafe Shutdowns: 0 00:17:12.060 Unrecoverable Media Errors: 0 00:17:12.060 Lifetime Error Log Entries: 0 00:17:12.060 Warning Temperature Time: 0 minutes 00:17:12.060 Critical Temperature Time: 0 minutes 00:17:12.060 00:17:12.060 Number of Queues 00:17:12.060 ================ 00:17:12.060 Number of I/O Submission Queues: 64 00:17:12.061 Number of I/O Completion Queues: 64 00:17:12.061 00:17:12.061 ZNS Specific Controller Data 00:17:12.061 ============================ 00:17:12.061 Zone Append Size Limit: 0 00:17:12.061 00:17:12.061 00:17:12.061 Active Namespaces 00:17:12.061 ================= 00:17:12.061 Namespace ID:1 00:17:12.061 Error Recovery Timeout: Unlimited 00:17:12.061 Command Set Identifier: NVM (00h) 00:17:12.061 Deallocate: Supported 00:17:12.061 Deallocated/Unwritten Error: Supported 00:17:12.061 Deallocated Read Value: All 0x00 00:17:12.061 Deallocate in Write Zeroes: Not Supported 00:17:12.061 Deallocated Guard Field: 0xFFFF 00:17:12.061 Flush: Supported 00:17:12.061 Reservation: Not Supported 00:17:12.061 Namespace Sharing Capabilities: Private 00:17:12.061 Size (in LBAs): 1310720 (5GiB) 00:17:12.061 Capacity (in LBAs): 1310720 (5GiB) 00:17:12.061 Utilization (in LBAs): 1310720 (5GiB) 00:17:12.061 Thin Provisioning: Not Supported 00:17:12.061 Per-NS Atomic Units: No 00:17:12.061 Maximum Single Source Range Length: 128 00:17:12.061 Maximum Copy Length: 128 00:17:12.061 Maximum Source Range Count: 128 00:17:12.061 NGUID/EUI64 Never Reused: No 00:17:12.061 Namespace Write Protected: No 00:17:12.061 Number of LBA Formats: 8 00:17:12.061 Current LBA Format: LBA Format #04 00:17:12.061 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.061 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.061 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.061 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.061 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.061 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.061 LBA 
Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.061 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:12.061 00:17:12.061 NVM Specific Namespace Data 00:17:12.061 =========================== 00:17:12.061 Logical Block Storage Tag Mask: 0 00:17:12.061 Protection Information Capabilities: 00:17:12.061 16b Guard Protection Information Storage Tag Support: No 00:17:12.061 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.061 Storage Tag Check Read Support: No 00:17:12.061 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.061 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.061 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.061 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.061 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.061 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.061 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.061 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.061 ===================================================== 00:17:12.061 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:17:12.061 ===================================================== 00:17:12.061 Controller Capabilities/Features 00:17:12.061 ================================ 00:17:12.061 Vendor ID: 1b36 00:17:12.061 Subsystem Vendor ID: 1af4 00:17:12.061 Serial Number: 12343 00:17:12.061 Model Number: QEMU NVMe Ctrl 00:17:12.061 Firmware Version: 8.0.0 00:17:12.061 Recommended Arb Burst: 6 00:17:12.061 IEEE OUI Identifier: 00 54 52 00:17:12.061 Multi-path I/O 00:17:12.061 May have multiple subsystem ports: No 00:17:12.061 May have multiple controllers: Yes 00:17:12.061 Associated with SR-IOV VF: No 00:17:12.061 Max Data Transfer Size: 524288 00:17:12.061 Max Number of Namespaces: 256 00:17:12.061 Max Number of I/O Queues: 64 00:17:12.061 NVMe Specification Version (VS): 1.4 00:17:12.061 NVMe Specification Version (Identify): 1.4 00:17:12.061 Maximum Queue Entries: 2048 00:17:12.061 Contiguous Queues Required: Yes 00:17:12.061 Arbitration Mechanisms Supported 00:17:12.061 Weighted Round Robin: Not Supported 00:17:12.061 Vendor Specific: Not Supported 00:17:12.061 Reset Timeout: 7500 ms 00:17:12.061 Doorbell Stride: 4 bytes 00:17:12.061 NVM Subsystem Reset: Not Supported 00:17:12.061 Command Sets Supported 00:17:12.061 NVM Command Set: Supported 00:17:12.061 Boot Partition: Not Supported 00:17:12.061 Memory Page Size Minimum: 4096 bytes 00:17:12.061 Memory Page Size Maximum: 65536 bytes 00:17:12.061 Persistent Memory Region: Not Supported 00:17:12.061 Optional Asynchronous Events Supported 00:17:12.061 Namespace Attribute Notices: Supported 00:17:12.061 Firmware Activation Notices: Not Supported 00:17:12.061 ANA Change Notices: Not Supported 00:17:12.061 PLE Aggregate Log Change Notices: Not Supported 00:17:12.061 LBA Status Info Alert Notices: Not Supported 00:17:12.061 EGE Aggregate Log Change Notices: Not Supported 00:17:12.061 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.061 Zone Descriptor Change Notices: Not Supported 00:17:12.061 Discovery Log Change Notices: Not Supported 00:17:12.061 Controller Attributes 00:17:12.061 128-bit Host Identifier: Not 
Supported 00:17:12.061 Non-Operational Permissive Mode: Not Supported 00:17:12.061 NVM Sets: Not Supported 00:17:12.061 Read Recovery Levels: Not Supported 00:17:12.061 Endurance Groups: Supported 00:17:12.061 Predictable Latency Mode: Not Supported 00:17:12.061 Traffic Based Keep ALive: Not Supported 00:17:12.061 Namespace Granularity: Not Supported 00:17:12.061 SQ Associations: Not Supported 00:17:12.061 UUID List: Not Supported 00:17:12.061 Multi-Domain Subsystem: Not Supported 00:17:12.061 Fixed Capacity Management: Not Supported 00:17:12.061 Variable Capacity Management: Not Supported 00:17:12.061 Delete Endurance Group: Not Supported 00:17:12.061 Delete NVM Set: Not Supported 00:17:12.061 Extended LBA Formats Supported: Supported 00:17:12.061 Flexible Data Placement Supported: Supported 00:17:12.061 00:17:12.061 Controller Memory Buffer Support 00:17:12.061 ================================ 00:17:12.061 Supported: No 00:17:12.061 00:17:12.061 Persistent Memory Region Support 00:17:12.061 ================================ 00:17:12.061 Supported: No 00:17:12.061 00:17:12.061 Admin Command Set Attributes 00:17:12.061 ============================ 00:17:12.061 Security Send/Receive: Not Supported 00:17:12.061 Format NVM: Supported 00:17:12.061 Firmware Activate/Download: Not Supported 00:17:12.061 Namespace Management: Supported 00:17:12.061 Device Self-Test: Not Supported 00:17:12.061 Directives: Supported 00:17:12.061 NVMe-MI: Not Supported 00:17:12.061 Virtualization Management: Not Supported 00:17:12.061 Doorbell Buffer Config: Supported 00:17:12.061 Get LBA Status Capability: Not Supported 00:17:12.061 Command & Feature Lockdown Capability: Not Supported 00:17:12.061 Abort Command Limit: 4 00:17:12.061 Async Event Request Limit: 4 00:17:12.061 Number of Firmware Slots: N/A 00:17:12.061 Firmware Slot 1 Read-Only: N/A 00:17:12.061 Firmware Activation Without Reset: N/A 00:17:12.061 Multiple Update Detection Support: N/A 00:17:12.061 Firmware Update Granularity: No Information Provided 00:17:12.061 Per-Namespace SMART Log: Yes 00:17:12.061 Asymmetric Namespace Access Log Page: Not Supported 00:17:12.061 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:17:12.061 Command Effects Log Page: Supported 00:17:12.061 Get Log Page Extended Data: Supported 00:17:12.061 Telemetry Log Pages: Not Supported 00:17:12.061 Persistent Event Log Pages: Not Supported 00:17:12.061 Supported Log Pages Log Page: May Support 00:17:12.061 Commands Supported & Effects Log Page: Not Supported 00:17:12.061 Feature Identifiers & Effects Log Page:May Support 00:17:12.061 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.061 Data Area 4 for Telemetry Log: Not Supported 00:17:12.061 Error Log Page Entries Supported: 1 00:17:12.061 Keep Alive: Not Supported 00:17:12.061 00:17:12.061 NVM Command Set Attributes 00:17:12.061 ========================== 00:17:12.061 Submission Queue Entry Size 00:17:12.061 Max: 64 00:17:12.061 Min: 64 00:17:12.061 Completion Queue Entry Size 00:17:12.061 Max: 16 00:17:12.061 Min: 16 00:17:12.061 Number of Namespaces: 256 00:17:12.061 Compare Command: Supported 00:17:12.061 Write Uncorrectable Command: Not Supported 00:17:12.061 Dataset Management Command: Supported 00:17:12.061 Write Zeroes Command: Supported 00:17:12.061 Set Features Save Field: Supported 00:17:12.061 Reservations: Not Supported 00:17:12.061 Timestamp: Supported 00:17:12.061 Copy: Supported 00:17:12.061 Volatile Write Cache: Present 00:17:12.061 Atomic Write Unit (Normal): 1 00:17:12.061 Atomic Write Unit (PFail): 
1 00:17:12.061 Atomic Compare & Write Unit: 1 00:17:12.061 Fused Compare & Write: Not Supported 00:17:12.061 Scatter-Gather List 00:17:12.061 SGL Command Set: Supported 00:17:12.062 SGL Keyed: Not Supported 00:17:12.062 SGL Bit Bucket Descriptor: Not Supported 00:17:12.062 SGL Metadata Pointer: Not Supported 00:17:12.062 Oversized SGL: Not Supported 00:17:12.062 SGL Metadata Address: Not Supported 00:17:12.062 SGL Offset: Not Supported 00:17:12.062 Transport SGL Data Block: Not Supported 00:17:12.062 Replay Protected Memory Block: Not Supported 00:17:12.062 00:17:12.062 Firmware Slot Information 00:17:12.062 ========================= 00:17:12.062 Active slot: 1 00:17:12.062 Slot 1 Firmware Revision: 1.0 00:17:12.062 00:17:12.062 00:17:12.062 Commands Supported and Effects 00:17:12.062 ============================== 00:17:12.062 Admin Commands 00:17:12.062 -------------- 00:17:12.062 Delete I/O Submission Queue (00h): Supported 00:17:12.062 Create I/O Submission Queue (01h): Supported 00:17:12.062 Get Log Page (02h): Supported 00:17:12.062 Delete I/O Completion Queue (04h): Supported 00:17:12.062 Create I/O Completion Queue (05h): Supported 00:17:12.062 Identify (06h): Supported 00:17:12.062 Abort (08h): Supported 00:17:12.062 Set Features (09h): Supported 00:17:12.062 Get Features (0Ah): Supported 00:17:12.062 Asynchronous Event Request (0Ch): Supported 00:17:12.062 Namespace Attachment (15h): Supported NS-Inventory-Change 00:17:12.062 Directive Send (19h): Supported 00:17:12.062 Directive Receive (1Ah): Supported 00:17:12.062 Virtualization Management (1Ch): Supported 00:17:12.062 Doorbell Buffer Config (7Ch): Supported 00:17:12.062 Format NVM (80h): Supported LBA-Change 00:17:12.062 I/O Commands 00:17:12.062 ------------ 00:17:12.062 Flush (00h): Supported LBA-Change 00:17:12.062 Write (01h): Supported LBA-Change 00:17:12.062 Read (02h): Supported 00:17:12.062 Compare (05h): Supported 00:17:12.062 Write Zeroes (08h): Supported LBA-Change 00:17:12.062 Dataset Management (09h): Supported LBA-Change 00:17:12.062 Unknown (0Ch): Supported 00:17:12.062 Unknown (12h): Supported 00:17:12.062 Copy (19h): Supported LBA-Change 00:17:12.062 Unknown (1Dh): Supported LBA-Change 00:17:12.062 00:17:12.062 Error Log 00:17:12.062 ========= 00:17:12.062 00:17:12.062 Arbitration 00:17:12.062 =========== 00:17:12.062 Arbitration Burst: no limit 00:17:12.062 00:17:12.062 Power Management 00:17:12.062 ================ 00:17:12.062 Number of Power States: 1 00:17:12.062 Current Power State: Power State #0 00:17:12.062 Power State #0: 00:17:12.062 Max Power: 25.00 W 00:17:12.062 Non-Operational State: Operational 00:17:12.062 Entry Latency: 16 microseconds 00:17:12.062 Exit Latency: 4 microseconds 00:17:12.062 Relative Read Throughput: 0 00:17:12.062 Relative Read Latency: 0 00:17:12.062 Relative Write Throughput: 0 00:17:12.062 Relative Write Latency: 0 00:17:12.062 Idle Power: Not Reported 00:17:12.062 Active Power: Not Reported 00:17:12.062 Non-Operational Permissive Mode: Not Supported 00:17:12.062 00:17:12.062 Health Information 00:17:12.062 ================== 00:17:12.062 Critical Warnings: 00:17:12.062 Available Spare Space: OK 00:17:12.062 Temperature: OK 00:17:12.062 Device Reliability: OK 00:17:12.062 Read Only: No 00:17:12.062 Volatile Memory Backup: OK 00:17:12.062 Current Temperature: 323 Kelvin (50 Celsius) 00:17:12.062 Temperature Threshold: 343 Kelvin (70 Celsius) 00:17:12.062 Available Spare: 0% 00:17:12.062 Available Spare Threshold: 0% 00:17:12.062 Life Percentage Used: 0% 00:17:12.062 Data 
Units Read: 837 00:17:12.062 Data Units Written: 730 00:17:12.062 Host Read Commands: 37229 00:17:12.062 Host Write Commands: 35819 00:17:12.062 Controller Busy Time: 0 minutes 00:17:12.062 Power Cycles: 0 00:17:12.062 Power On Hours: 0 hours 00:17:12.062 Unsafe Shutdowns: 0 00:17:12.062 Unrecoverable Media Errors: 0 00:17:12.062 Lifetime Error Log Entries: 0 00:17:12.062 Warning Temperature Time: 0 minutes 00:17:12.062 Critical Temperature Time: 0 minutes 00:17:12.062 00:17:12.062 Number of Queues 00:17:12.062 ================ 00:17:12.062 Number of I/O Submission Queues: 64 00:17:12.062 Number of I/O Completion Queues: 64 00:17:12.062 00:17:12.062 ZNS Specific Controller Data 00:17:12.062 ============================ 00:17:12.062 Zone Append Size Limit: 0 00:17:12.062 00:17:12.062 00:17:12.062 Active Namespaces 00:17:12.062 ================= 00:17:12.062 Namespace ID:1 00:17:12.062 Error Recovery Timeout: Unlimited 00:17:12.062 Command Set Identifier: NVM (00h) 00:17:12.062 Deallocate: Supported 00:17:12.062 Deallocated/Unwritten Error: Supported 00:17:12.062 Deallocated Read Value: All 0x00 00:17:12.062 Deallocate in Write Zeroes: Not Supported 00:17:12.062 Deallocated Guard Field: 0xFFFF 00:17:12.062 Flush: Supported 00:17:12.062 Reservation: Not Supported 00:17:12.062 Namespace Sharing Capabilities: Multiple Controllers 00:17:12.062 Size (in LBAs): 262144 (1GiB) 00:17:12.062 Capacity (in LBAs): 262144 (1GiB) 00:17:12.062 Utilization (in LBAs): 262144 (1GiB) 00:17:12.062 Thin Provisioning: Not Supported 00:17:12.062 Per-NS Atomic Units: No 00:17:12.062 Maximum Single Source Range Length: 128 00:17:12.062 Maximum Copy Length: 128 00:17:12.062 Maximum Source Range Count: 128 00:17:12.062 NGUID/EUI64 Never Reused: No 00:17:12.062 Namespace Write Protected: No 00:17:12.062 Endurance group ID: 1 00:17:12.062 Number of LBA Formats: 8 00:17:12.062 Current LBA Format: LBA Format #04 00:17:12.062 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.062 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.062 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.062 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.062 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.062 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.062 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.062 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:12.062 00:17:12.062 Get Feature FDP: 00:17:12.062 ================ 00:17:12.062 Enabled: Yes 00:17:12.062 FDP configuration index: 0 00:17:12.062 00:17:12.062 FDP configurations log page 00:17:12.062 =========================== 00:17:12.062 Number of FDP configurations: 1 00:17:12.062 Version: 0 00:17:12.062 Size: 112 00:17:12.062 FDP Configuration Descriptor: 0 00:17:12.062 Descriptor Size: 96 00:17:12.062 Reclaim Group Identifier format: 2 00:17:12.062 FDP Volatile Write Cache: Not Present 00:17:12.062 FDP Configuration: Valid 00:17:12.062 Vendor Specific Size: 0 00:17:12.062 Number of Reclaim Groups: 2 00:17:12.062 Number of Recalim Unit Handles: 8 00:17:12.062 Max Placement Identifiers: 128 00:17:12.062 Number of Namespaces Suppprted: 256 00:17:12.062 Reclaim unit Nominal Size: 6000000 bytes 00:17:12.062 Estimated Reclaim Unit Time Limit: Not Reported 00:17:12.062 RUH Desc #000: RUH Type: Initially Isolated 00:17:12.062 RUH Desc #001: RUH Type: Initially Isolated 00:17:12.062 RUH Desc #002: RUH Type: Initially Isolated 00:17:12.062 RUH Desc #003: RUH Type: Initially Isolated 00:17:12.062 RUH Desc #004: RUH Type: 
Initially Isolated 00:17:12.062 RUH Desc #005: RUH Type: Initially Isolated 00:17:12.062 RUH Desc #006: RUH Type: Initially Isolated 00:17:12.062 RUH Desc #007: RUH Type: Initially Isolated 00:17:12.062 00:17:12.062 FDP reclaim unit handle usage log page 00:17:12.062 ====================================== 00:17:12.062 Number of Reclaim Unit Handles: 8 00:17:12.062 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:17:12.062 RUH Usage Desc #001: RUH Attributes: Unused 00:17:12.062 RUH Usage Desc #002: RUH Attributes: Unused 00:17:12.062 RUH Usage Desc #003: RUH Attributes: Unused 00:17:12.062 RUH Usage Desc #004: RUH Attributes: Unused 00:17:12.062 RUH Usage Desc #005: RUH Attributes: Unused 00:17:12.062 RUH Usage Desc #006: RUH Attributes: Unused 00:17:12.062 RUH Usage Desc #007: RUH Attributes: Unused 00:17:12.062 00:17:12.062 FDP statistics log page 00:17:12.062 ======================= 00:17:12.062 Host bytes with metadata written: 472424448 00:17:12.062 Medi[2024-07-25 09:31:12.644503] nvme_ctrlr.c:3608:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 68579 terminated unexpected 00:17:12.062 a bytes with metadata written: 472477696 00:17:12.062 Media bytes erased: 0 00:17:12.062 00:17:12.062 FDP events log page 00:17:12.062 =================== 00:17:12.062 Number of FDP events: 0 00:17:12.062 00:17:12.062 NVM Specific Namespace Data 00:17:12.062 =========================== 00:17:12.062 Logical Block Storage Tag Mask: 0 00:17:12.062 Protection Information Capabilities: 00:17:12.062 16b Guard Protection Information Storage Tag Support: No 00:17:12.062 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.062 Storage Tag Check Read Support: No 00:17:12.063 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.063 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.063 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.063 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.063 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.063 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.063 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.063 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.063 ===================================================== 00:17:12.063 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:17:12.063 ===================================================== 00:17:12.063 Controller Capabilities/Features 00:17:12.063 ================================ 00:17:12.063 Vendor ID: 1b36 00:17:12.063 Subsystem Vendor ID: 1af4 00:17:12.063 Serial Number: 12342 00:17:12.063 Model Number: QEMU NVMe Ctrl 00:17:12.063 Firmware Version: 8.0.0 00:17:12.063 Recommended Arb Burst: 6 00:17:12.063 IEEE OUI Identifier: 00 54 52 00:17:12.063 Multi-path I/O 00:17:12.063 May have multiple subsystem ports: No 00:17:12.063 May have multiple controllers: No 00:17:12.063 Associated with SR-IOV VF: No 00:17:12.063 Max Data Transfer Size: 524288 00:17:12.063 Max Number of Namespaces: 256 00:17:12.063 Max Number of I/O Queues: 64 00:17:12.063 NVMe Specification Version (VS): 1.4 00:17:12.063 NVMe Specification Version (Identify): 1.4 00:17:12.063 Maximum Queue Entries: 2048 
00:17:12.063 Contiguous Queues Required: Yes 00:17:12.063 Arbitration Mechanisms Supported 00:17:12.063 Weighted Round Robin: Not Supported 00:17:12.063 Vendor Specific: Not Supported 00:17:12.063 Reset Timeout: 7500 ms 00:17:12.063 Doorbell Stride: 4 bytes 00:17:12.063 NVM Subsystem Reset: Not Supported 00:17:12.063 Command Sets Supported 00:17:12.063 NVM Command Set: Supported 00:17:12.063 Boot Partition: Not Supported 00:17:12.063 Memory Page Size Minimum: 4096 bytes 00:17:12.063 Memory Page Size Maximum: 65536 bytes 00:17:12.063 Persistent Memory Region: Not Supported 00:17:12.063 Optional Asynchronous Events Supported 00:17:12.063 Namespace Attribute Notices: Supported 00:17:12.063 Firmware Activation Notices: Not Supported 00:17:12.063 ANA Change Notices: Not Supported 00:17:12.063 PLE Aggregate Log Change Notices: Not Supported 00:17:12.063 LBA Status Info Alert Notices: Not Supported 00:17:12.063 EGE Aggregate Log Change Notices: Not Supported 00:17:12.063 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.063 Zone Descriptor Change Notices: Not Supported 00:17:12.063 Discovery Log Change Notices: Not Supported 00:17:12.063 Controller Attributes 00:17:12.063 128-bit Host Identifier: Not Supported 00:17:12.063 Non-Operational Permissive Mode: Not Supported 00:17:12.063 NVM Sets: Not Supported 00:17:12.063 Read Recovery Levels: Not Supported 00:17:12.063 Endurance Groups: Not Supported 00:17:12.063 Predictable Latency Mode: Not Supported 00:17:12.063 Traffic Based Keep ALive: Not Supported 00:17:12.063 Namespace Granularity: Not Supported 00:17:12.063 SQ Associations: Not Supported 00:17:12.063 UUID List: Not Supported 00:17:12.063 Multi-Domain Subsystem: Not Supported 00:17:12.063 Fixed Capacity Management: Not Supported 00:17:12.063 Variable Capacity Management: Not Supported 00:17:12.063 Delete Endurance Group: Not Supported 00:17:12.063 Delete NVM Set: Not Supported 00:17:12.063 Extended LBA Formats Supported: Supported 00:17:12.063 Flexible Data Placement Supported: Not Supported 00:17:12.063 00:17:12.063 Controller Memory Buffer Support 00:17:12.063 ================================ 00:17:12.063 Supported: No 00:17:12.063 00:17:12.063 Persistent Memory Region Support 00:17:12.063 ================================ 00:17:12.063 Supported: No 00:17:12.063 00:17:12.063 Admin Command Set Attributes 00:17:12.063 ============================ 00:17:12.063 Security Send/Receive: Not Supported 00:17:12.063 Format NVM: Supported 00:17:12.063 Firmware Activate/Download: Not Supported 00:17:12.063 Namespace Management: Supported 00:17:12.063 Device Self-Test: Not Supported 00:17:12.063 Directives: Supported 00:17:12.063 NVMe-MI: Not Supported 00:17:12.063 Virtualization Management: Not Supported 00:17:12.063 Doorbell Buffer Config: Supported 00:17:12.063 Get LBA Status Capability: Not Supported 00:17:12.063 Command & Feature Lockdown Capability: Not Supported 00:17:12.063 Abort Command Limit: 4 00:17:12.063 Async Event Request Limit: 4 00:17:12.063 Number of Firmware Slots: N/A 00:17:12.063 Firmware Slot 1 Read-Only: N/A 00:17:12.063 Firmware Activation Without Reset: N/A 00:17:12.063 Multiple Update Detection Support: N/A 00:17:12.063 Firmware Update Granularity: No Information Provided 00:17:12.063 Per-Namespace SMART Log: Yes 00:17:12.063 Asymmetric Namespace Access Log Page: Not Supported 00:17:12.063 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:17:12.063 Command Effects Log Page: Supported 00:17:12.063 Get Log Page Extended Data: Supported 00:17:12.063 Telemetry Log Pages: Not 
Supported 00:17:12.063 Persistent Event Log Pages: Not Supported 00:17:12.063 Supported Log Pages Log Page: May Support 00:17:12.063 Commands Supported & Effects Log Page: Not Supported 00:17:12.063 Feature Identifiers & Effects Log Page:May Support 00:17:12.063 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.063 Data Area 4 for Telemetry Log: Not Supported 00:17:12.063 Error Log Page Entries Supported: 1 00:17:12.063 Keep Alive: Not Supported 00:17:12.063 00:17:12.063 NVM Command Set Attributes 00:17:12.063 ========================== 00:17:12.063 Submission Queue Entry Size 00:17:12.063 Max: 64 00:17:12.063 Min: 64 00:17:12.063 Completion Queue Entry Size 00:17:12.063 Max: 16 00:17:12.063 Min: 16 00:17:12.063 Number of Namespaces: 256 00:17:12.063 Compare Command: Supported 00:17:12.063 Write Uncorrectable Command: Not Supported 00:17:12.063 Dataset Management Command: Supported 00:17:12.063 Write Zeroes Command: Supported 00:17:12.063 Set Features Save Field: Supported 00:17:12.063 Reservations: Not Supported 00:17:12.063 Timestamp: Supported 00:17:12.063 Copy: Supported 00:17:12.063 Volatile Write Cache: Present 00:17:12.063 Atomic Write Unit (Normal): 1 00:17:12.063 Atomic Write Unit (PFail): 1 00:17:12.063 Atomic Compare & Write Unit: 1 00:17:12.063 Fused Compare & Write: Not Supported 00:17:12.063 Scatter-Gather List 00:17:12.063 SGL Command Set: Supported 00:17:12.063 SGL Keyed: Not Supported 00:17:12.063 SGL Bit Bucket Descriptor: Not Supported 00:17:12.063 SGL Metadata Pointer: Not Supported 00:17:12.063 Oversized SGL: Not Supported 00:17:12.063 SGL Metadata Address: Not Supported 00:17:12.063 SGL Offset: Not Supported 00:17:12.063 Transport SGL Data Block: Not Supported 00:17:12.063 Replay Protected Memory Block: Not Supported 00:17:12.063 00:17:12.063 Firmware Slot Information 00:17:12.063 ========================= 00:17:12.064 Active slot: 1 00:17:12.064 Slot 1 Firmware Revision: 1.0 00:17:12.064 00:17:12.064 00:17:12.064 Commands Supported and Effects 00:17:12.064 ============================== 00:17:12.064 Admin Commands 00:17:12.064 -------------- 00:17:12.064 Delete I/O Submission Queue (00h): Supported 00:17:12.064 Create I/O Submission Queue (01h): Supported 00:17:12.064 Get Log Page (02h): Supported 00:17:12.064 Delete I/O Completion Queue (04h): Supported 00:17:12.064 Create I/O Completion Queue (05h): Supported 00:17:12.064 Identify (06h): Supported 00:17:12.064 Abort (08h): Supported 00:17:12.064 Set Features (09h): Supported 00:17:12.064 Get Features (0Ah): Supported 00:17:12.064 Asynchronous Event Request (0Ch): Supported 00:17:12.064 Namespace Attachment (15h): Supported NS-Inventory-Change 00:17:12.064 Directive Send (19h): Supported 00:17:12.064 Directive Receive (1Ah): Supported 00:17:12.064 Virtualization Management (1Ch): Supported 00:17:12.064 Doorbell Buffer Config (7Ch): Supported 00:17:12.064 Format NVM (80h): Supported LBA-Change 00:17:12.064 I/O Commands 00:17:12.064 ------------ 00:17:12.064 Flush (00h): Supported LBA-Change 00:17:12.064 Write (01h): Supported LBA-Change 00:17:12.064 Read (02h): Supported 00:17:12.064 Compare (05h): Supported 00:17:12.064 Write Zeroes (08h): Supported LBA-Change 00:17:12.064 Dataset Management (09h): Supported LBA-Change 00:17:12.064 Unknown (0Ch): Supported 00:17:12.064 Unknown (12h): Supported 00:17:12.064 Copy (19h): Supported LBA-Change 00:17:12.064 Unknown (1Dh): Supported LBA-Change 00:17:12.064 00:17:12.064 Error Log 00:17:12.064 ========= 00:17:12.064 00:17:12.064 Arbitration 00:17:12.064 =========== 
00:17:12.064 Arbitration Burst: no limit 00:17:12.064 00:17:12.064 Power Management 00:17:12.064 ================ 00:17:12.064 Number of Power States: 1 00:17:12.064 Current Power State: Power State #0 00:17:12.064 Power State #0: 00:17:12.064 Max Power: 25.00 W 00:17:12.064 Non-Operational State: Operational 00:17:12.064 Entry Latency: 16 microseconds 00:17:12.064 Exit Latency: 4 microseconds 00:17:12.064 Relative Read Throughput: 0 00:17:12.064 Relative Read Latency: 0 00:17:12.064 Relative Write Throughput: 0 00:17:12.064 Relative Write Latency: 0 00:17:12.064 Idle Power: Not Reported 00:17:12.064 Active Power: Not Reported 00:17:12.064 Non-Operational Permissive Mode: Not Supported 00:17:12.064 00:17:12.064 Health Information 00:17:12.064 ================== 00:17:12.064 Critical Warnings: 00:17:12.064 Available Spare Space: OK 00:17:12.064 Temperature: OK 00:17:12.064 Device Reliability: OK 00:17:12.064 Read Only: No 00:17:12.064 Volatile Memory Backup: OK 00:17:12.064 Current Temperature: 323 Kelvin (50 Celsius) 00:17:12.064 Temperature Threshold: 343 Kelvin (70 Celsius) 00:17:12.064 Available Spare: 0% 00:17:12.064 Available Spare Threshold: 0% 00:17:12.064 Life Percentage Used: 0% 00:17:12.064 Data Units Read: 2401 00:17:12.064 Data Units Written: 2081 00:17:12.064 Host Read Commands: 110722 00:17:12.064 Host Write Commands: 106495 00:17:12.064 Controller Busy Time: 0 minutes 00:17:12.064 Power Cycles: 0 00:17:12.064 Power On Hours: 0 hours 00:17:12.064 Unsafe Shutdowns: 0 00:17:12.064 Unrecoverable Media Errors: 0 00:17:12.064 Lifetime Error Log Entries: 0 00:17:12.064 Warning Temperature Time: 0 minutes 00:17:12.064 Critical Temperature Time: 0 minutes 00:17:12.064 00:17:12.064 Number of Queues 00:17:12.064 ================ 00:17:12.064 Number of I/O Submission Queues: 64 00:17:12.064 Number of I/O Completion Queues: 64 00:17:12.064 00:17:12.064 ZNS Specific Controller Data 00:17:12.064 ============================ 00:17:12.064 Zone Append Size Limit: 0 00:17:12.064 00:17:12.064 00:17:12.064 Active Namespaces 00:17:12.064 ================= 00:17:12.064 Namespace ID:1 00:17:12.064 Error Recovery Timeout: Unlimited 00:17:12.064 Command Set Identifier: NVM (00h) 00:17:12.064 Deallocate: Supported 00:17:12.064 Deallocated/Unwritten Error: Supported 00:17:12.064 Deallocated Read Value: All 0x00 00:17:12.064 Deallocate in Write Zeroes: Not Supported 00:17:12.064 Deallocated Guard Field: 0xFFFF 00:17:12.064 Flush: Supported 00:17:12.064 Reservation: Not Supported 00:17:12.064 Namespace Sharing Capabilities: Private 00:17:12.064 Size (in LBAs): 1048576 (4GiB) 00:17:12.064 Capacity (in LBAs): 1048576 (4GiB) 00:17:12.064 Utilization (in LBAs): 1048576 (4GiB) 00:17:12.064 Thin Provisioning: Not Supported 00:17:12.064 Per-NS Atomic Units: No 00:17:12.064 Maximum Single Source Range Length: 128 00:17:12.064 Maximum Copy Length: 128 00:17:12.064 Maximum Source Range Count: 128 00:17:12.064 NGUID/EUI64 Never Reused: No 00:17:12.064 Namespace Write Protected: No 00:17:12.064 Number of LBA Formats: 8 00:17:12.064 Current LBA Format: LBA Format #04 00:17:12.064 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.064 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.064 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.064 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.064 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.064 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.064 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.064 LBA Format 
#07: Data Size: 4096 Metadata Size: 64 00:17:12.064 00:17:12.064 NVM Specific Namespace Data 00:17:12.064 =========================== 00:17:12.064 Logical Block Storage Tag Mask: 0 00:17:12.064 Protection Information Capabilities: 00:17:12.064 16b Guard Protection Information Storage Tag Support: No 00:17:12.064 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.064 Storage Tag Check Read Support: No 00:17:12.064 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Namespace ID:2 00:17:12.064 Error Recovery Timeout: Unlimited 00:17:12.064 Command Set Identifier: NVM (00h) 00:17:12.064 Deallocate: Supported 00:17:12.064 Deallocated/Unwritten Error: Supported 00:17:12.064 Deallocated Read Value: All 0x00 00:17:12.064 Deallocate in Write Zeroes: Not Supported 00:17:12.064 Deallocated Guard Field: 0xFFFF 00:17:12.064 Flush: Supported 00:17:12.064 Reservation: Not Supported 00:17:12.064 Namespace Sharing Capabilities: Private 00:17:12.064 Size (in LBAs): 1048576 (4GiB) 00:17:12.064 Capacity (in LBAs): 1048576 (4GiB) 00:17:12.064 Utilization (in LBAs): 1048576 (4GiB) 00:17:12.064 Thin Provisioning: Not Supported 00:17:12.064 Per-NS Atomic Units: No 00:17:12.064 Maximum Single Source Range Length: 128 00:17:12.064 Maximum Copy Length: 128 00:17:12.064 Maximum Source Range Count: 128 00:17:12.064 NGUID/EUI64 Never Reused: No 00:17:12.064 Namespace Write Protected: No 00:17:12.064 Number of LBA Formats: 8 00:17:12.064 Current LBA Format: LBA Format #04 00:17:12.064 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.064 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.064 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.064 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.064 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.064 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.064 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.064 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:12.064 00:17:12.064 NVM Specific Namespace Data 00:17:12.064 =========================== 00:17:12.064 Logical Block Storage Tag Mask: 0 00:17:12.064 Protection Information Capabilities: 00:17:12.064 16b Guard Protection Information Storage Tag Support: No 00:17:12.064 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.064 Storage Tag Check Read Support: No 00:17:12.064 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #03: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.064 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.065 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.065 Namespace ID:3 00:17:12.065 Error Recovery Timeout: Unlimited 00:17:12.065 Command Set Identifier: NVM (00h) 00:17:12.065 Deallocate: Supported 00:17:12.065 Deallocated/Unwritten Error: Supported 00:17:12.065 Deallocated Read Value: All 0x00 00:17:12.065 Deallocate in Write Zeroes: Not Supported 00:17:12.065 Deallocated Guard Field: 0xFFFF 00:17:12.065 Flush: Supported 00:17:12.065 Reservation: Not Supported 00:17:12.065 Namespace Sharing Capabilities: Private 00:17:12.065 Size (in LBAs): 1048576 (4GiB) 00:17:12.327 Capacity (in LBAs): 1048576 (4GiB) 00:17:12.327 Utilization (in LBAs): 1048576 (4GiB) 00:17:12.327 Thin Provisioning: Not Supported 00:17:12.327 Per-NS Atomic Units: No 00:17:12.327 Maximum Single Source Range Length: 128 00:17:12.327 Maximum Copy Length: 128 00:17:12.327 Maximum Source Range Count: 128 00:17:12.327 NGUID/EUI64 Never Reused: No 00:17:12.327 Namespace Write Protected: No 00:17:12.327 Number of LBA Formats: 8 00:17:12.327 Current LBA Format: LBA Format #04 00:17:12.327 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.327 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.327 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.327 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.327 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.327 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.327 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.327 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:12.327 00:17:12.327 NVM Specific Namespace Data 00:17:12.327 =========================== 00:17:12.327 Logical Block Storage Tag Mask: 0 00:17:12.327 Protection Information Capabilities: 00:17:12.327 16b Guard Protection Information Storage Tag Support: No 00:17:12.327 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.327 Storage Tag Check Read Support: No 00:17:12.327 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.327 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.327 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.327 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.327 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.327 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.327 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.327 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.327 09:31:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:17:12.327 09:31:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:17:12.327 ===================================================== 00:17:12.327 NVMe Controller at 
0000:00:10.0 [1b36:0010] 00:17:12.327 ===================================================== 00:17:12.327 Controller Capabilities/Features 00:17:12.327 ================================ 00:17:12.327 Vendor ID: 1b36 00:17:12.327 Subsystem Vendor ID: 1af4 00:17:12.327 Serial Number: 12340 00:17:12.327 Model Number: QEMU NVMe Ctrl 00:17:12.327 Firmware Version: 8.0.0 00:17:12.327 Recommended Arb Burst: 6 00:17:12.327 IEEE OUI Identifier: 00 54 52 00:17:12.327 Multi-path I/O 00:17:12.327 May have multiple subsystem ports: No 00:17:12.327 May have multiple controllers: No 00:17:12.327 Associated with SR-IOV VF: No 00:17:12.327 Max Data Transfer Size: 524288 00:17:12.327 Max Number of Namespaces: 256 00:17:12.327 Max Number of I/O Queues: 64 00:17:12.327 NVMe Specification Version (VS): 1.4 00:17:12.327 NVMe Specification Version (Identify): 1.4 00:17:12.327 Maximum Queue Entries: 2048 00:17:12.327 Contiguous Queues Required: Yes 00:17:12.327 Arbitration Mechanisms Supported 00:17:12.327 Weighted Round Robin: Not Supported 00:17:12.327 Vendor Specific: Not Supported 00:17:12.327 Reset Timeout: 7500 ms 00:17:12.327 Doorbell Stride: 4 bytes 00:17:12.327 NVM Subsystem Reset: Not Supported 00:17:12.327 Command Sets Supported 00:17:12.327 NVM Command Set: Supported 00:17:12.327 Boot Partition: Not Supported 00:17:12.327 Memory Page Size Minimum: 4096 bytes 00:17:12.327 Memory Page Size Maximum: 65536 bytes 00:17:12.327 Persistent Memory Region: Not Supported 00:17:12.327 Optional Asynchronous Events Supported 00:17:12.327 Namespace Attribute Notices: Supported 00:17:12.327 Firmware Activation Notices: Not Supported 00:17:12.327 ANA Change Notices: Not Supported 00:17:12.327 PLE Aggregate Log Change Notices: Not Supported 00:17:12.327 LBA Status Info Alert Notices: Not Supported 00:17:12.327 EGE Aggregate Log Change Notices: Not Supported 00:17:12.327 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.327 Zone Descriptor Change Notices: Not Supported 00:17:12.327 Discovery Log Change Notices: Not Supported 00:17:12.327 Controller Attributes 00:17:12.327 128-bit Host Identifier: Not Supported 00:17:12.327 Non-Operational Permissive Mode: Not Supported 00:17:12.327 NVM Sets: Not Supported 00:17:12.327 Read Recovery Levels: Not Supported 00:17:12.327 Endurance Groups: Not Supported 00:17:12.327 Predictable Latency Mode: Not Supported 00:17:12.327 Traffic Based Keep ALive: Not Supported 00:17:12.327 Namespace Granularity: Not Supported 00:17:12.327 SQ Associations: Not Supported 00:17:12.327 UUID List: Not Supported 00:17:12.327 Multi-Domain Subsystem: Not Supported 00:17:12.327 Fixed Capacity Management: Not Supported 00:17:12.327 Variable Capacity Management: Not Supported 00:17:12.327 Delete Endurance Group: Not Supported 00:17:12.327 Delete NVM Set: Not Supported 00:17:12.327 Extended LBA Formats Supported: Supported 00:17:12.327 Flexible Data Placement Supported: Not Supported 00:17:12.327 00:17:12.327 Controller Memory Buffer Support 00:17:12.327 ================================ 00:17:12.327 Supported: No 00:17:12.327 00:17:12.327 Persistent Memory Region Support 00:17:12.327 ================================ 00:17:12.327 Supported: No 00:17:12.327 00:17:12.327 Admin Command Set Attributes 00:17:12.327 ============================ 00:17:12.327 Security Send/Receive: Not Supported 00:17:12.327 Format NVM: Supported 00:17:12.327 Firmware Activate/Download: Not Supported 00:17:12.327 Namespace Management: Supported 00:17:12.327 Device Self-Test: Not Supported 00:17:12.327 Directives: Supported 
00:17:12.327 NVMe-MI: Not Supported 00:17:12.327 Virtualization Management: Not Supported 00:17:12.327 Doorbell Buffer Config: Supported 00:17:12.327 Get LBA Status Capability: Not Supported 00:17:12.327 Command & Feature Lockdown Capability: Not Supported 00:17:12.327 Abort Command Limit: 4 00:17:12.327 Async Event Request Limit: 4 00:17:12.327 Number of Firmware Slots: N/A 00:17:12.327 Firmware Slot 1 Read-Only: N/A 00:17:12.327 Firmware Activation Without Reset: N/A 00:17:12.327 Multiple Update Detection Support: N/A 00:17:12.327 Firmware Update Granularity: No Information Provided 00:17:12.327 Per-Namespace SMART Log: Yes 00:17:12.327 Asymmetric Namespace Access Log Page: Not Supported 00:17:12.327 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:17:12.327 Command Effects Log Page: Supported 00:17:12.327 Get Log Page Extended Data: Supported 00:17:12.327 Telemetry Log Pages: Not Supported 00:17:12.327 Persistent Event Log Pages: Not Supported 00:17:12.327 Supported Log Pages Log Page: May Support 00:17:12.327 Commands Supported & Effects Log Page: Not Supported 00:17:12.327 Feature Identifiers & Effects Log Page:May Support 00:17:12.327 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.327 Data Area 4 for Telemetry Log: Not Supported 00:17:12.327 Error Log Page Entries Supported: 1 00:17:12.327 Keep Alive: Not Supported 00:17:12.327 00:17:12.327 NVM Command Set Attributes 00:17:12.327 ========================== 00:17:12.327 Submission Queue Entry Size 00:17:12.327 Max: 64 00:17:12.327 Min: 64 00:17:12.327 Completion Queue Entry Size 00:17:12.327 Max: 16 00:17:12.327 Min: 16 00:17:12.327 Number of Namespaces: 256 00:17:12.327 Compare Command: Supported 00:17:12.327 Write Uncorrectable Command: Not Supported 00:17:12.327 Dataset Management Command: Supported 00:17:12.327 Write Zeroes Command: Supported 00:17:12.327 Set Features Save Field: Supported 00:17:12.327 Reservations: Not Supported 00:17:12.327 Timestamp: Supported 00:17:12.327 Copy: Supported 00:17:12.327 Volatile Write Cache: Present 00:17:12.327 Atomic Write Unit (Normal): 1 00:17:12.327 Atomic Write Unit (PFail): 1 00:17:12.327 Atomic Compare & Write Unit: 1 00:17:12.327 Fused Compare & Write: Not Supported 00:17:12.327 Scatter-Gather List 00:17:12.327 SGL Command Set: Supported 00:17:12.327 SGL Keyed: Not Supported 00:17:12.327 SGL Bit Bucket Descriptor: Not Supported 00:17:12.327 SGL Metadata Pointer: Not Supported 00:17:12.327 Oversized SGL: Not Supported 00:17:12.327 SGL Metadata Address: Not Supported 00:17:12.327 SGL Offset: Not Supported 00:17:12.327 Transport SGL Data Block: Not Supported 00:17:12.327 Replay Protected Memory Block: Not Supported 00:17:12.327 00:17:12.327 Firmware Slot Information 00:17:12.327 ========================= 00:17:12.327 Active slot: 1 00:17:12.328 Slot 1 Firmware Revision: 1.0 00:17:12.328 00:17:12.328 00:17:12.328 Commands Supported and Effects 00:17:12.328 ============================== 00:17:12.328 Admin Commands 00:17:12.328 -------------- 00:17:12.328 Delete I/O Submission Queue (00h): Supported 00:17:12.328 Create I/O Submission Queue (01h): Supported 00:17:12.328 Get Log Page (02h): Supported 00:17:12.328 Delete I/O Completion Queue (04h): Supported 00:17:12.328 Create I/O Completion Queue (05h): Supported 00:17:12.328 Identify (06h): Supported 00:17:12.328 Abort (08h): Supported 00:17:12.328 Set Features (09h): Supported 00:17:12.328 Get Features (0Ah): Supported 00:17:12.328 Asynchronous Event Request (0Ch): Supported 00:17:12.328 Namespace Attachment (15h): Supported 
NS-Inventory-Change 00:17:12.328 Directive Send (19h): Supported 00:17:12.328 Directive Receive (1Ah): Supported 00:17:12.328 Virtualization Management (1Ch): Supported 00:17:12.328 Doorbell Buffer Config (7Ch): Supported 00:17:12.328 Format NVM (80h): Supported LBA-Change 00:17:12.328 I/O Commands 00:17:12.328 ------------ 00:17:12.328 Flush (00h): Supported LBA-Change 00:17:12.328 Write (01h): Supported LBA-Change 00:17:12.328 Read (02h): Supported 00:17:12.328 Compare (05h): Supported 00:17:12.328 Write Zeroes (08h): Supported LBA-Change 00:17:12.328 Dataset Management (09h): Supported LBA-Change 00:17:12.328 Unknown (0Ch): Supported 00:17:12.328 Unknown (12h): Supported 00:17:12.328 Copy (19h): Supported LBA-Change 00:17:12.328 Unknown (1Dh): Supported LBA-Change 00:17:12.328 00:17:12.328 Error Log 00:17:12.328 ========= 00:17:12.328 00:17:12.328 Arbitration 00:17:12.328 =========== 00:17:12.328 Arbitration Burst: no limit 00:17:12.328 00:17:12.328 Power Management 00:17:12.328 ================ 00:17:12.328 Number of Power States: 1 00:17:12.328 Current Power State: Power State #0 00:17:12.328 Power State #0: 00:17:12.328 Max Power: 25.00 W 00:17:12.328 Non-Operational State: Operational 00:17:12.328 Entry Latency: 16 microseconds 00:17:12.328 Exit Latency: 4 microseconds 00:17:12.328 Relative Read Throughput: 0 00:17:12.328 Relative Read Latency: 0 00:17:12.328 Relative Write Throughput: 0 00:17:12.328 Relative Write Latency: 0 00:17:12.328 Idle Power: Not Reported 00:17:12.328 Active Power: Not Reported 00:17:12.328 Non-Operational Permissive Mode: Not Supported 00:17:12.328 00:17:12.328 Health Information 00:17:12.328 ================== 00:17:12.328 Critical Warnings: 00:17:12.328 Available Spare Space: OK 00:17:12.328 Temperature: OK 00:17:12.328 Device Reliability: OK 00:17:12.328 Read Only: No 00:17:12.328 Volatile Memory Backup: OK 00:17:12.328 Current Temperature: 323 Kelvin (50 Celsius) 00:17:12.328 Temperature Threshold: 343 Kelvin (70 Celsius) 00:17:12.328 Available Spare: 0% 00:17:12.328 Available Spare Threshold: 0% 00:17:12.328 Life Percentage Used: 0% 00:17:12.328 Data Units Read: 776 00:17:12.328 Data Units Written: 668 00:17:12.328 Host Read Commands: 36246 00:17:12.328 Host Write Commands: 35284 00:17:12.328 Controller Busy Time: 0 minutes 00:17:12.328 Power Cycles: 0 00:17:12.328 Power On Hours: 0 hours 00:17:12.328 Unsafe Shutdowns: 0 00:17:12.328 Unrecoverable Media Errors: 0 00:17:12.328 Lifetime Error Log Entries: 0 00:17:12.328 Warning Temperature Time: 0 minutes 00:17:12.328 Critical Temperature Time: 0 minutes 00:17:12.328 00:17:12.328 Number of Queues 00:17:12.328 ================ 00:17:12.328 Number of I/O Submission Queues: 64 00:17:12.328 Number of I/O Completion Queues: 64 00:17:12.328 00:17:12.328 ZNS Specific Controller Data 00:17:12.328 ============================ 00:17:12.328 Zone Append Size Limit: 0 00:17:12.328 00:17:12.328 00:17:12.328 Active Namespaces 00:17:12.328 ================= 00:17:12.328 Namespace ID:1 00:17:12.328 Error Recovery Timeout: Unlimited 00:17:12.328 Command Set Identifier: NVM (00h) 00:17:12.328 Deallocate: Supported 00:17:12.328 Deallocated/Unwritten Error: Supported 00:17:12.328 Deallocated Read Value: All 0x00 00:17:12.328 Deallocate in Write Zeroes: Not Supported 00:17:12.328 Deallocated Guard Field: 0xFFFF 00:17:12.328 Flush: Supported 00:17:12.328 Reservation: Not Supported 00:17:12.328 Metadata Transferred as: Separate Metadata Buffer 00:17:12.328 Namespace Sharing Capabilities: Private 00:17:12.328 Size (in LBAs): 
1548666 (5GiB) 00:17:12.328 Capacity (in LBAs): 1548666 (5GiB) 00:17:12.328 Utilization (in LBAs): 1548666 (5GiB) 00:17:12.328 Thin Provisioning: Not Supported 00:17:12.328 Per-NS Atomic Units: No 00:17:12.328 Maximum Single Source Range Length: 128 00:17:12.328 Maximum Copy Length: 128 00:17:12.328 Maximum Source Range Count: 128 00:17:12.328 NGUID/EUI64 Never Reused: No 00:17:12.328 Namespace Write Protected: No 00:17:12.328 Number of LBA Formats: 8 00:17:12.328 Current LBA Format: LBA Format #07 00:17:12.328 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.328 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.328 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.328 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.328 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.328 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.328 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.328 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:12.328 00:17:12.328 NVM Specific Namespace Data 00:17:12.328 =========================== 00:17:12.328 Logical Block Storage Tag Mask: 0 00:17:12.328 Protection Information Capabilities: 00:17:12.328 16b Guard Protection Information Storage Tag Support: No 00:17:12.328 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.328 Storage Tag Check Read Support: No 00:17:12.328 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.328 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.328 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.328 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.328 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.328 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.328 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.328 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.328 09:31:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:17:12.328 09:31:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:17:12.589 ===================================================== 00:17:12.589 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:17:12.589 ===================================================== 00:17:12.589 Controller Capabilities/Features 00:17:12.589 ================================ 00:17:12.589 Vendor ID: 1b36 00:17:12.589 Subsystem Vendor ID: 1af4 00:17:12.589 Serial Number: 12341 00:17:12.589 Model Number: QEMU NVMe Ctrl 00:17:12.589 Firmware Version: 8.0.0 00:17:12.589 Recommended Arb Burst: 6 00:17:12.589 IEEE OUI Identifier: 00 54 52 00:17:12.589 Multi-path I/O 00:17:12.589 May have multiple subsystem ports: No 00:17:12.589 May have multiple controllers: No 00:17:12.589 Associated with SR-IOV VF: No 00:17:12.589 Max Data Transfer Size: 524288 00:17:12.589 Max Number of Namespaces: 256 00:17:12.589 Max Number of I/O Queues: 64 00:17:12.589 NVMe Specification Version (VS): 1.4 00:17:12.590 NVMe Specification Version (Identify): 1.4 00:17:12.590 Maximum Queue Entries: 2048 00:17:12.590 Contiguous Queues Required: Yes 00:17:12.590 Arbitration 
Mechanisms Supported 00:17:12.590 Weighted Round Robin: Not Supported 00:17:12.590 Vendor Specific: Not Supported 00:17:12.590 Reset Timeout: 7500 ms 00:17:12.590 Doorbell Stride: 4 bytes 00:17:12.590 NVM Subsystem Reset: Not Supported 00:17:12.590 Command Sets Supported 00:17:12.590 NVM Command Set: Supported 00:17:12.590 Boot Partition: Not Supported 00:17:12.590 Memory Page Size Minimum: 4096 bytes 00:17:12.590 Memory Page Size Maximum: 65536 bytes 00:17:12.590 Persistent Memory Region: Not Supported 00:17:12.590 Optional Asynchronous Events Supported 00:17:12.590 Namespace Attribute Notices: Supported 00:17:12.590 Firmware Activation Notices: Not Supported 00:17:12.590 ANA Change Notices: Not Supported 00:17:12.590 PLE Aggregate Log Change Notices: Not Supported 00:17:12.590 LBA Status Info Alert Notices: Not Supported 00:17:12.590 EGE Aggregate Log Change Notices: Not Supported 00:17:12.590 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.590 Zone Descriptor Change Notices: Not Supported 00:17:12.590 Discovery Log Change Notices: Not Supported 00:17:12.590 Controller Attributes 00:17:12.590 128-bit Host Identifier: Not Supported 00:17:12.590 Non-Operational Permissive Mode: Not Supported 00:17:12.590 NVM Sets: Not Supported 00:17:12.590 Read Recovery Levels: Not Supported 00:17:12.590 Endurance Groups: Not Supported 00:17:12.590 Predictable Latency Mode: Not Supported 00:17:12.590 Traffic Based Keep ALive: Not Supported 00:17:12.590 Namespace Granularity: Not Supported 00:17:12.590 SQ Associations: Not Supported 00:17:12.590 UUID List: Not Supported 00:17:12.590 Multi-Domain Subsystem: Not Supported 00:17:12.590 Fixed Capacity Management: Not Supported 00:17:12.590 Variable Capacity Management: Not Supported 00:17:12.590 Delete Endurance Group: Not Supported 00:17:12.590 Delete NVM Set: Not Supported 00:17:12.590 Extended LBA Formats Supported: Supported 00:17:12.590 Flexible Data Placement Supported: Not Supported 00:17:12.590 00:17:12.590 Controller Memory Buffer Support 00:17:12.590 ================================ 00:17:12.590 Supported: No 00:17:12.590 00:17:12.590 Persistent Memory Region Support 00:17:12.590 ================================ 00:17:12.590 Supported: No 00:17:12.590 00:17:12.590 Admin Command Set Attributes 00:17:12.590 ============================ 00:17:12.590 Security Send/Receive: Not Supported 00:17:12.590 Format NVM: Supported 00:17:12.590 Firmware Activate/Download: Not Supported 00:17:12.590 Namespace Management: Supported 00:17:12.590 Device Self-Test: Not Supported 00:17:12.590 Directives: Supported 00:17:12.590 NVMe-MI: Not Supported 00:17:12.590 Virtualization Management: Not Supported 00:17:12.590 Doorbell Buffer Config: Supported 00:17:12.590 Get LBA Status Capability: Not Supported 00:17:12.590 Command & Feature Lockdown Capability: Not Supported 00:17:12.590 Abort Command Limit: 4 00:17:12.590 Async Event Request Limit: 4 00:17:12.590 Number of Firmware Slots: N/A 00:17:12.590 Firmware Slot 1 Read-Only: N/A 00:17:12.590 Firmware Activation Without Reset: N/A 00:17:12.590 Multiple Update Detection Support: N/A 00:17:12.590 Firmware Update Granularity: No Information Provided 00:17:12.590 Per-Namespace SMART Log: Yes 00:17:12.590 Asymmetric Namespace Access Log Page: Not Supported 00:17:12.590 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:17:12.590 Command Effects Log Page: Supported 00:17:12.590 Get Log Page Extended Data: Supported 00:17:12.590 Telemetry Log Pages: Not Supported 00:17:12.590 Persistent Event Log Pages: Not Supported 
00:17:12.590 Supported Log Pages Log Page: May Support 00:17:12.590 Commands Supported & Effects Log Page: Not Supported 00:17:12.590 Feature Identifiers & Effects Log Page:May Support 00:17:12.590 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.590 Data Area 4 for Telemetry Log: Not Supported 00:17:12.590 Error Log Page Entries Supported: 1 00:17:12.590 Keep Alive: Not Supported 00:17:12.590 00:17:12.590 NVM Command Set Attributes 00:17:12.590 ========================== 00:17:12.590 Submission Queue Entry Size 00:17:12.590 Max: 64 00:17:12.590 Min: 64 00:17:12.590 Completion Queue Entry Size 00:17:12.590 Max: 16 00:17:12.590 Min: 16 00:17:12.590 Number of Namespaces: 256 00:17:12.590 Compare Command: Supported 00:17:12.590 Write Uncorrectable Command: Not Supported 00:17:12.590 Dataset Management Command: Supported 00:17:12.590 Write Zeroes Command: Supported 00:17:12.590 Set Features Save Field: Supported 00:17:12.590 Reservations: Not Supported 00:17:12.590 Timestamp: Supported 00:17:12.590 Copy: Supported 00:17:12.590 Volatile Write Cache: Present 00:17:12.590 Atomic Write Unit (Normal): 1 00:17:12.590 Atomic Write Unit (PFail): 1 00:17:12.590 Atomic Compare & Write Unit: 1 00:17:12.590 Fused Compare & Write: Not Supported 00:17:12.590 Scatter-Gather List 00:17:12.590 SGL Command Set: Supported 00:17:12.590 SGL Keyed: Not Supported 00:17:12.590 SGL Bit Bucket Descriptor: Not Supported 00:17:12.590 SGL Metadata Pointer: Not Supported 00:17:12.590 Oversized SGL: Not Supported 00:17:12.590 SGL Metadata Address: Not Supported 00:17:12.590 SGL Offset: Not Supported 00:17:12.590 Transport SGL Data Block: Not Supported 00:17:12.590 Replay Protected Memory Block: Not Supported 00:17:12.590 00:17:12.590 Firmware Slot Information 00:17:12.590 ========================= 00:17:12.590 Active slot: 1 00:17:12.590 Slot 1 Firmware Revision: 1.0 00:17:12.590 00:17:12.590 00:17:12.590 Commands Supported and Effects 00:17:12.590 ============================== 00:17:12.590 Admin Commands 00:17:12.590 -------------- 00:17:12.590 Delete I/O Submission Queue (00h): Supported 00:17:12.590 Create I/O Submission Queue (01h): Supported 00:17:12.590 Get Log Page (02h): Supported 00:17:12.590 Delete I/O Completion Queue (04h): Supported 00:17:12.590 Create I/O Completion Queue (05h): Supported 00:17:12.590 Identify (06h): Supported 00:17:12.590 Abort (08h): Supported 00:17:12.590 Set Features (09h): Supported 00:17:12.590 Get Features (0Ah): Supported 00:17:12.590 Asynchronous Event Request (0Ch): Supported 00:17:12.590 Namespace Attachment (15h): Supported NS-Inventory-Change 00:17:12.590 Directive Send (19h): Supported 00:17:12.590 Directive Receive (1Ah): Supported 00:17:12.590 Virtualization Management (1Ch): Supported 00:17:12.590 Doorbell Buffer Config (7Ch): Supported 00:17:12.590 Format NVM (80h): Supported LBA-Change 00:17:12.590 I/O Commands 00:17:12.590 ------------ 00:17:12.590 Flush (00h): Supported LBA-Change 00:17:12.590 Write (01h): Supported LBA-Change 00:17:12.590 Read (02h): Supported 00:17:12.590 Compare (05h): Supported 00:17:12.590 Write Zeroes (08h): Supported LBA-Change 00:17:12.590 Dataset Management (09h): Supported LBA-Change 00:17:12.590 Unknown (0Ch): Supported 00:17:12.590 Unknown (12h): Supported 00:17:12.590 Copy (19h): Supported LBA-Change 00:17:12.590 Unknown (1Dh): Supported LBA-Change 00:17:12.590 00:17:12.590 Error Log 00:17:12.590 ========= 00:17:12.590 00:17:12.590 Arbitration 00:17:12.590 =========== 00:17:12.590 Arbitration Burst: no limit 00:17:12.590 00:17:12.590 
Power Management 00:17:12.590 ================ 00:17:12.590 Number of Power States: 1 00:17:12.590 Current Power State: Power State #0 00:17:12.590 Power State #0: 00:17:12.590 Max Power: 25.00 W 00:17:12.590 Non-Operational State: Operational 00:17:12.590 Entry Latency: 16 microseconds 00:17:12.590 Exit Latency: 4 microseconds 00:17:12.590 Relative Read Throughput: 0 00:17:12.590 Relative Read Latency: 0 00:17:12.590 Relative Write Throughput: 0 00:17:12.590 Relative Write Latency: 0 00:17:12.590 Idle Power: Not Reported 00:17:12.590 Active Power: Not Reported 00:17:12.590 Non-Operational Permissive Mode: Not Supported 00:17:12.590 00:17:12.590 Health Information 00:17:12.590 ================== 00:17:12.590 Critical Warnings: 00:17:12.590 Available Spare Space: OK 00:17:12.590 Temperature: OK 00:17:12.590 Device Reliability: OK 00:17:12.590 Read Only: No 00:17:12.590 Volatile Memory Backup: OK 00:17:12.590 Current Temperature: 323 Kelvin (50 Celsius) 00:17:12.590 Temperature Threshold: 343 Kelvin (70 Celsius) 00:17:12.590 Available Spare: 0% 00:17:12.590 Available Spare Threshold: 0% 00:17:12.590 Life Percentage Used: 0% 00:17:12.590 Data Units Read: 1209 00:17:12.590 Data Units Written: 998 00:17:12.590 Host Read Commands: 54526 00:17:12.591 Host Write Commands: 51661 00:17:12.591 Controller Busy Time: 0 minutes 00:17:12.591 Power Cycles: 0 00:17:12.591 Power On Hours: 0 hours 00:17:12.591 Unsafe Shutdowns: 0 00:17:12.591 Unrecoverable Media Errors: 0 00:17:12.591 Lifetime Error Log Entries: 0 00:17:12.591 Warning Temperature Time: 0 minutes 00:17:12.591 Critical Temperature Time: 0 minutes 00:17:12.591 00:17:12.591 Number of Queues 00:17:12.591 ================ 00:17:12.591 Number of I/O Submission Queues: 64 00:17:12.591 Number of I/O Completion Queues: 64 00:17:12.591 00:17:12.591 ZNS Specific Controller Data 00:17:12.591 ============================ 00:17:12.591 Zone Append Size Limit: 0 00:17:12.591 00:17:12.591 00:17:12.591 Active Namespaces 00:17:12.591 ================= 00:17:12.591 Namespace ID:1 00:17:12.591 Error Recovery Timeout: Unlimited 00:17:12.591 Command Set Identifier: NVM (00h) 00:17:12.591 Deallocate: Supported 00:17:12.591 Deallocated/Unwritten Error: Supported 00:17:12.591 Deallocated Read Value: All 0x00 00:17:12.591 Deallocate in Write Zeroes: Not Supported 00:17:12.591 Deallocated Guard Field: 0xFFFF 00:17:12.591 Flush: Supported 00:17:12.591 Reservation: Not Supported 00:17:12.591 Namespace Sharing Capabilities: Private 00:17:12.591 Size (in LBAs): 1310720 (5GiB) 00:17:12.591 Capacity (in LBAs): 1310720 (5GiB) 00:17:12.591 Utilization (in LBAs): 1310720 (5GiB) 00:17:12.591 Thin Provisioning: Not Supported 00:17:12.591 Per-NS Atomic Units: No 00:17:12.591 Maximum Single Source Range Length: 128 00:17:12.591 Maximum Copy Length: 128 00:17:12.591 Maximum Source Range Count: 128 00:17:12.591 NGUID/EUI64 Never Reused: No 00:17:12.591 Namespace Write Protected: No 00:17:12.591 Number of LBA Formats: 8 00:17:12.591 Current LBA Format: LBA Format #04 00:17:12.591 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.591 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.591 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.591 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.591 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.591 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.591 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.591 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:12.591 00:17:12.591 NVM 
Specific Namespace Data 00:17:12.591 =========================== 00:17:12.591 Logical Block Storage Tag Mask: 0 00:17:12.591 Protection Information Capabilities: 00:17:12.591 16b Guard Protection Information Storage Tag Support: No 00:17:12.591 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.591 Storage Tag Check Read Support: No 00:17:12.591 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.591 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.591 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.591 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.591 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.591 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.591 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.591 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.591 09:31:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:17:12.591 09:31:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:17:12.852 ===================================================== 00:17:12.852 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:17:12.852 ===================================================== 00:17:12.852 Controller Capabilities/Features 00:17:12.852 ================================ 00:17:12.852 Vendor ID: 1b36 00:17:12.852 Subsystem Vendor ID: 1af4 00:17:12.852 Serial Number: 12342 00:17:12.852 Model Number: QEMU NVMe Ctrl 00:17:12.852 Firmware Version: 8.0.0 00:17:12.852 Recommended Arb Burst: 6 00:17:12.852 IEEE OUI Identifier: 00 54 52 00:17:12.852 Multi-path I/O 00:17:12.852 May have multiple subsystem ports: No 00:17:12.852 May have multiple controllers: No 00:17:12.852 Associated with SR-IOV VF: No 00:17:12.852 Max Data Transfer Size: 524288 00:17:12.852 Max Number of Namespaces: 256 00:17:12.852 Max Number of I/O Queues: 64 00:17:12.852 NVMe Specification Version (VS): 1.4 00:17:12.852 NVMe Specification Version (Identify): 1.4 00:17:12.852 Maximum Queue Entries: 2048 00:17:12.852 Contiguous Queues Required: Yes 00:17:12.852 Arbitration Mechanisms Supported 00:17:12.852 Weighted Round Robin: Not Supported 00:17:12.852 Vendor Specific: Not Supported 00:17:12.852 Reset Timeout: 7500 ms 00:17:12.852 Doorbell Stride: 4 bytes 00:17:12.852 NVM Subsystem Reset: Not Supported 00:17:12.852 Command Sets Supported 00:17:12.852 NVM Command Set: Supported 00:17:12.852 Boot Partition: Not Supported 00:17:12.852 Memory Page Size Minimum: 4096 bytes 00:17:12.852 Memory Page Size Maximum: 65536 bytes 00:17:12.852 Persistent Memory Region: Not Supported 00:17:12.852 Optional Asynchronous Events Supported 00:17:12.852 Namespace Attribute Notices: Supported 00:17:12.852 Firmware Activation Notices: Not Supported 00:17:12.852 ANA Change Notices: Not Supported 00:17:12.852 PLE Aggregate Log Change Notices: Not Supported 00:17:12.852 LBA Status Info Alert Notices: Not Supported 00:17:12.852 EGE Aggregate Log Change Notices: Not Supported 00:17:12.852 Normal NVM Subsystem Shutdown event: Not Supported 00:17:12.852 Zone Descriptor Change Notices: Not Supported 00:17:12.852 
Discovery Log Change Notices: Not Supported 00:17:12.852 Controller Attributes 00:17:12.852 128-bit Host Identifier: Not Supported 00:17:12.852 Non-Operational Permissive Mode: Not Supported 00:17:12.852 NVM Sets: Not Supported 00:17:12.852 Read Recovery Levels: Not Supported 00:17:12.852 Endurance Groups: Not Supported 00:17:12.852 Predictable Latency Mode: Not Supported 00:17:12.852 Traffic Based Keep ALive: Not Supported 00:17:12.852 Namespace Granularity: Not Supported 00:17:12.852 SQ Associations: Not Supported 00:17:12.852 UUID List: Not Supported 00:17:12.852 Multi-Domain Subsystem: Not Supported 00:17:12.852 Fixed Capacity Management: Not Supported 00:17:12.852 Variable Capacity Management: Not Supported 00:17:12.852 Delete Endurance Group: Not Supported 00:17:12.852 Delete NVM Set: Not Supported 00:17:12.852 Extended LBA Formats Supported: Supported 00:17:12.852 Flexible Data Placement Supported: Not Supported 00:17:12.852 00:17:12.852 Controller Memory Buffer Support 00:17:12.852 ================================ 00:17:12.852 Supported: No 00:17:12.852 00:17:12.852 Persistent Memory Region Support 00:17:12.852 ================================ 00:17:12.852 Supported: No 00:17:12.852 00:17:12.852 Admin Command Set Attributes 00:17:12.852 ============================ 00:17:12.852 Security Send/Receive: Not Supported 00:17:12.852 Format NVM: Supported 00:17:12.852 Firmware Activate/Download: Not Supported 00:17:12.852 Namespace Management: Supported 00:17:12.852 Device Self-Test: Not Supported 00:17:12.852 Directives: Supported 00:17:12.852 NVMe-MI: Not Supported 00:17:12.852 Virtualization Management: Not Supported 00:17:12.852 Doorbell Buffer Config: Supported 00:17:12.852 Get LBA Status Capability: Not Supported 00:17:12.852 Command & Feature Lockdown Capability: Not Supported 00:17:12.852 Abort Command Limit: 4 00:17:12.852 Async Event Request Limit: 4 00:17:12.852 Number of Firmware Slots: N/A 00:17:12.852 Firmware Slot 1 Read-Only: N/A 00:17:12.852 Firmware Activation Without Reset: N/A 00:17:12.852 Multiple Update Detection Support: N/A 00:17:12.852 Firmware Update Granularity: No Information Provided 00:17:12.852 Per-Namespace SMART Log: Yes 00:17:12.852 Asymmetric Namespace Access Log Page: Not Supported 00:17:12.852 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:17:12.852 Command Effects Log Page: Supported 00:17:12.852 Get Log Page Extended Data: Supported 00:17:12.852 Telemetry Log Pages: Not Supported 00:17:12.852 Persistent Event Log Pages: Not Supported 00:17:12.852 Supported Log Pages Log Page: May Support 00:17:12.852 Commands Supported & Effects Log Page: Not Supported 00:17:12.852 Feature Identifiers & Effects Log Page:May Support 00:17:12.852 NVMe-MI Commands & Effects Log Page: May Support 00:17:12.852 Data Area 4 for Telemetry Log: Not Supported 00:17:12.852 Error Log Page Entries Supported: 1 00:17:12.852 Keep Alive: Not Supported 00:17:12.852 00:17:12.852 NVM Command Set Attributes 00:17:12.852 ========================== 00:17:12.852 Submission Queue Entry Size 00:17:12.853 Max: 64 00:17:12.853 Min: 64 00:17:12.853 Completion Queue Entry Size 00:17:12.853 Max: 16 00:17:12.853 Min: 16 00:17:12.853 Number of Namespaces: 256 00:17:12.853 Compare Command: Supported 00:17:12.853 Write Uncorrectable Command: Not Supported 00:17:12.853 Dataset Management Command: Supported 00:17:12.853 Write Zeroes Command: Supported 00:17:12.853 Set Features Save Field: Supported 00:17:12.853 Reservations: Not Supported 00:17:12.853 Timestamp: Supported 00:17:12.853 Copy: Supported 
00:17:12.853 Volatile Write Cache: Present 00:17:12.853 Atomic Write Unit (Normal): 1 00:17:12.853 Atomic Write Unit (PFail): 1 00:17:12.853 Atomic Compare & Write Unit: 1 00:17:12.853 Fused Compare & Write: Not Supported 00:17:12.853 Scatter-Gather List 00:17:12.853 SGL Command Set: Supported 00:17:12.853 SGL Keyed: Not Supported 00:17:12.853 SGL Bit Bucket Descriptor: Not Supported 00:17:12.853 SGL Metadata Pointer: Not Supported 00:17:12.853 Oversized SGL: Not Supported 00:17:12.853 SGL Metadata Address: Not Supported 00:17:12.853 SGL Offset: Not Supported 00:17:12.853 Transport SGL Data Block: Not Supported 00:17:12.853 Replay Protected Memory Block: Not Supported 00:17:12.853 00:17:12.853 Firmware Slot Information 00:17:12.853 ========================= 00:17:12.853 Active slot: 1 00:17:12.853 Slot 1 Firmware Revision: 1.0 00:17:12.853 00:17:12.853 00:17:12.853 Commands Supported and Effects 00:17:12.853 ============================== 00:17:12.853 Admin Commands 00:17:12.853 -------------- 00:17:12.853 Delete I/O Submission Queue (00h): Supported 00:17:12.853 Create I/O Submission Queue (01h): Supported 00:17:12.853 Get Log Page (02h): Supported 00:17:12.853 Delete I/O Completion Queue (04h): Supported 00:17:12.853 Create I/O Completion Queue (05h): Supported 00:17:12.853 Identify (06h): Supported 00:17:12.853 Abort (08h): Supported 00:17:12.853 Set Features (09h): Supported 00:17:12.853 Get Features (0Ah): Supported 00:17:12.853 Asynchronous Event Request (0Ch): Supported 00:17:12.853 Namespace Attachment (15h): Supported NS-Inventory-Change 00:17:12.853 Directive Send (19h): Supported 00:17:12.853 Directive Receive (1Ah): Supported 00:17:12.853 Virtualization Management (1Ch): Supported 00:17:12.853 Doorbell Buffer Config (7Ch): Supported 00:17:12.853 Format NVM (80h): Supported LBA-Change 00:17:12.853 I/O Commands 00:17:12.853 ------------ 00:17:12.853 Flush (00h): Supported LBA-Change 00:17:12.853 Write (01h): Supported LBA-Change 00:17:12.853 Read (02h): Supported 00:17:12.853 Compare (05h): Supported 00:17:12.853 Write Zeroes (08h): Supported LBA-Change 00:17:12.853 Dataset Management (09h): Supported LBA-Change 00:17:12.853 Unknown (0Ch): Supported 00:17:12.853 Unknown (12h): Supported 00:17:12.853 Copy (19h): Supported LBA-Change 00:17:12.853 Unknown (1Dh): Supported LBA-Change 00:17:12.853 00:17:12.853 Error Log 00:17:12.853 ========= 00:17:12.853 00:17:12.853 Arbitration 00:17:12.853 =========== 00:17:12.853 Arbitration Burst: no limit 00:17:12.853 00:17:12.853 Power Management 00:17:12.853 ================ 00:17:12.853 Number of Power States: 1 00:17:12.853 Current Power State: Power State #0 00:17:12.853 Power State #0: 00:17:12.853 Max Power: 25.00 W 00:17:12.853 Non-Operational State: Operational 00:17:12.853 Entry Latency: 16 microseconds 00:17:12.853 Exit Latency: 4 microseconds 00:17:12.853 Relative Read Throughput: 0 00:17:12.853 Relative Read Latency: 0 00:17:12.853 Relative Write Throughput: 0 00:17:12.853 Relative Write Latency: 0 00:17:12.853 Idle Power: Not Reported 00:17:12.853 Active Power: Not Reported 00:17:12.853 Non-Operational Permissive Mode: Not Supported 00:17:12.853 00:17:12.853 Health Information 00:17:12.853 ================== 00:17:12.853 Critical Warnings: 00:17:12.853 Available Spare Space: OK 00:17:12.853 Temperature: OK 00:17:12.853 Device Reliability: OK 00:17:12.853 Read Only: No 00:17:12.853 Volatile Memory Backup: OK 00:17:12.853 Current Temperature: 323 Kelvin (50 Celsius) 00:17:12.853 Temperature Threshold: 343 Kelvin (70 Celsius) 
00:17:12.853 Available Spare: 0% 00:17:12.853 Available Spare Threshold: 0% 00:17:12.853 Life Percentage Used: 0% 00:17:12.853 Data Units Read: 2401 00:17:12.853 Data Units Written: 2081 00:17:12.853 Host Read Commands: 110722 00:17:12.853 Host Write Commands: 106495 00:17:12.853 Controller Busy Time: 0 minutes 00:17:12.853 Power Cycles: 0 00:17:12.853 Power On Hours: 0 hours 00:17:12.853 Unsafe Shutdowns: 0 00:17:12.853 Unrecoverable Media Errors: 0 00:17:12.853 Lifetime Error Log Entries: 0 00:17:12.853 Warning Temperature Time: 0 minutes 00:17:12.853 Critical Temperature Time: 0 minutes 00:17:12.853 00:17:12.853 Number of Queues 00:17:12.853 ================ 00:17:12.853 Number of I/O Submission Queues: 64 00:17:12.853 Number of I/O Completion Queues: 64 00:17:12.853 00:17:12.853 ZNS Specific Controller Data 00:17:12.853 ============================ 00:17:12.853 Zone Append Size Limit: 0 00:17:12.853 00:17:12.853 00:17:12.853 Active Namespaces 00:17:12.853 ================= 00:17:12.853 Namespace ID:1 00:17:12.853 Error Recovery Timeout: Unlimited 00:17:12.853 Command Set Identifier: NVM (00h) 00:17:12.853 Deallocate: Supported 00:17:12.853 Deallocated/Unwritten Error: Supported 00:17:12.853 Deallocated Read Value: All 0x00 00:17:12.853 Deallocate in Write Zeroes: Not Supported 00:17:12.853 Deallocated Guard Field: 0xFFFF 00:17:12.853 Flush: Supported 00:17:12.853 Reservation: Not Supported 00:17:12.853 Namespace Sharing Capabilities: Private 00:17:12.853 Size (in LBAs): 1048576 (4GiB) 00:17:12.853 Capacity (in LBAs): 1048576 (4GiB) 00:17:12.853 Utilization (in LBAs): 1048576 (4GiB) 00:17:12.853 Thin Provisioning: Not Supported 00:17:12.853 Per-NS Atomic Units: No 00:17:12.853 Maximum Single Source Range Length: 128 00:17:12.853 Maximum Copy Length: 128 00:17:12.853 Maximum Source Range Count: 128 00:17:12.853 NGUID/EUI64 Never Reused: No 00:17:12.853 Namespace Write Protected: No 00:17:12.853 Number of LBA Formats: 8 00:17:12.853 Current LBA Format: LBA Format #04 00:17:12.853 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.853 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.853 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.853 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.853 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.853 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.853 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.853 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:12.853 00:17:12.853 NVM Specific Namespace Data 00:17:12.853 =========================== 00:17:12.853 Logical Block Storage Tag Mask: 0 00:17:12.853 Protection Information Capabilities: 00:17:12.853 16b Guard Protection Information Storage Tag Support: No 00:17:12.853 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.853 Storage Tag Check Read Support: No 00:17:12.853 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.853 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.853 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.853 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.853 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.853 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.853 Extended LBA 
Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.853 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.853 Namespace ID:2 00:17:12.853 Error Recovery Timeout: Unlimited 00:17:12.853 Command Set Identifier: NVM (00h) 00:17:12.853 Deallocate: Supported 00:17:12.853 Deallocated/Unwritten Error: Supported 00:17:12.853 Deallocated Read Value: All 0x00 00:17:12.853 Deallocate in Write Zeroes: Not Supported 00:17:12.853 Deallocated Guard Field: 0xFFFF 00:17:12.853 Flush: Supported 00:17:12.853 Reservation: Not Supported 00:17:12.853 Namespace Sharing Capabilities: Private 00:17:12.853 Size (in LBAs): 1048576 (4GiB) 00:17:12.853 Capacity (in LBAs): 1048576 (4GiB) 00:17:12.853 Utilization (in LBAs): 1048576 (4GiB) 00:17:12.853 Thin Provisioning: Not Supported 00:17:12.853 Per-NS Atomic Units: No 00:17:12.853 Maximum Single Source Range Length: 128 00:17:12.853 Maximum Copy Length: 128 00:17:12.853 Maximum Source Range Count: 128 00:17:12.853 NGUID/EUI64 Never Reused: No 00:17:12.853 Namespace Write Protected: No 00:17:12.853 Number of LBA Formats: 8 00:17:12.854 Current LBA Format: LBA Format #04 00:17:12.854 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.854 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.854 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.854 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.854 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.854 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.854 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.854 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:12.854 00:17:12.854 NVM Specific Namespace Data 00:17:12.854 =========================== 00:17:12.854 Logical Block Storage Tag Mask: 0 00:17:12.854 Protection Information Capabilities: 00:17:12.854 16b Guard Protection Information Storage Tag Support: No 00:17:12.854 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.854 Storage Tag Check Read Support: No 00:17:12.854 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Namespace ID:3 00:17:12.854 Error Recovery Timeout: Unlimited 00:17:12.854 Command Set Identifier: NVM (00h) 00:17:12.854 Deallocate: Supported 00:17:12.854 Deallocated/Unwritten Error: Supported 00:17:12.854 Deallocated Read Value: All 0x00 00:17:12.854 Deallocate in Write Zeroes: Not Supported 00:17:12.854 Deallocated Guard Field: 0xFFFF 00:17:12.854 Flush: Supported 00:17:12.854 Reservation: Not Supported 00:17:12.854 Namespace Sharing Capabilities: Private 00:17:12.854 Size (in LBAs): 1048576 (4GiB) 00:17:12.854 Capacity (in LBAs): 1048576 (4GiB) 00:17:12.854 Utilization (in LBAs): 1048576 (4GiB) 00:17:12.854 
Thin Provisioning: Not Supported 00:17:12.854 Per-NS Atomic Units: No 00:17:12.854 Maximum Single Source Range Length: 128 00:17:12.854 Maximum Copy Length: 128 00:17:12.854 Maximum Source Range Count: 128 00:17:12.854 NGUID/EUI64 Never Reused: No 00:17:12.854 Namespace Write Protected: No 00:17:12.854 Number of LBA Formats: 8 00:17:12.854 Current LBA Format: LBA Format #04 00:17:12.854 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:12.854 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:12.854 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:12.854 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:12.854 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:12.854 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:12.854 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:12.854 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:12.854 00:17:12.854 NVM Specific Namespace Data 00:17:12.854 =========================== 00:17:12.854 Logical Block Storage Tag Mask: 0 00:17:12.854 Protection Information Capabilities: 00:17:12.854 16b Guard Protection Information Storage Tag Support: No 00:17:12.854 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:12.854 Storage Tag Check Read Support: No 00:17:12.854 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:12.854 09:31:13 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:17:12.854 09:31:13 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:17:13.113 ===================================================== 00:17:13.113 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:17:13.113 ===================================================== 00:17:13.113 Controller Capabilities/Features 00:17:13.113 ================================ 00:17:13.113 Vendor ID: 1b36 00:17:13.113 Subsystem Vendor ID: 1af4 00:17:13.113 Serial Number: 12343 00:17:13.113 Model Number: QEMU NVMe Ctrl 00:17:13.113 Firmware Version: 8.0.0 00:17:13.113 Recommended Arb Burst: 6 00:17:13.113 IEEE OUI Identifier: 00 54 52 00:17:13.113 Multi-path I/O 00:17:13.113 May have multiple subsystem ports: No 00:17:13.113 May have multiple controllers: Yes 00:17:13.113 Associated with SR-IOV VF: No 00:17:13.113 Max Data Transfer Size: 524288 00:17:13.113 Max Number of Namespaces: 256 00:17:13.113 Max Number of I/O Queues: 64 00:17:13.113 NVMe Specification Version (VS): 1.4 00:17:13.113 NVMe Specification Version (Identify): 1.4 00:17:13.113 Maximum Queue Entries: 2048 00:17:13.113 Contiguous Queues Required: Yes 00:17:13.113 Arbitration Mechanisms Supported 00:17:13.113 Weighted Round Robin: Not Supported 00:17:13.113 Vendor Specific: Not Supported 00:17:13.113 Reset 
Timeout: 7500 ms 00:17:13.113 Doorbell Stride: 4 bytes 00:17:13.113 NVM Subsystem Reset: Not Supported 00:17:13.113 Command Sets Supported 00:17:13.113 NVM Command Set: Supported 00:17:13.113 Boot Partition: Not Supported 00:17:13.113 Memory Page Size Minimum: 4096 bytes 00:17:13.113 Memory Page Size Maximum: 65536 bytes 00:17:13.113 Persistent Memory Region: Not Supported 00:17:13.113 Optional Asynchronous Events Supported 00:17:13.113 Namespace Attribute Notices: Supported 00:17:13.113 Firmware Activation Notices: Not Supported 00:17:13.113 ANA Change Notices: Not Supported 00:17:13.113 PLE Aggregate Log Change Notices: Not Supported 00:17:13.113 LBA Status Info Alert Notices: Not Supported 00:17:13.113 EGE Aggregate Log Change Notices: Not Supported 00:17:13.113 Normal NVM Subsystem Shutdown event: Not Supported 00:17:13.113 Zone Descriptor Change Notices: Not Supported 00:17:13.113 Discovery Log Change Notices: Not Supported 00:17:13.113 Controller Attributes 00:17:13.113 128-bit Host Identifier: Not Supported 00:17:13.113 Non-Operational Permissive Mode: Not Supported 00:17:13.113 NVM Sets: Not Supported 00:17:13.113 Read Recovery Levels: Not Supported 00:17:13.113 Endurance Groups: Supported 00:17:13.113 Predictable Latency Mode: Not Supported 00:17:13.113 Traffic Based Keep Alive: Not Supported 00:17:13.113 Namespace Granularity: Not Supported 00:17:13.113 SQ Associations: Not Supported 00:17:13.113 UUID List: Not Supported 00:17:13.113 Multi-Domain Subsystem: Not Supported 00:17:13.113 Fixed Capacity Management: Not Supported 00:17:13.113 Variable Capacity Management: Not Supported 00:17:13.114 Delete Endurance Group: Not Supported 00:17:13.114 Delete NVM Set: Not Supported 00:17:13.114 Extended LBA Formats Supported: Supported 00:17:13.114 Flexible Data Placement Supported: Supported 00:17:13.114 00:17:13.114 Controller Memory Buffer Support 00:17:13.114 ================================ 00:17:13.114 Supported: No 00:17:13.114 00:17:13.114 Persistent Memory Region Support 00:17:13.114 ================================ 00:17:13.114 Supported: No 00:17:13.114 00:17:13.114 Admin Command Set Attributes 00:17:13.114 ============================ 00:17:13.114 Security Send/Receive: Not Supported 00:17:13.114 Format NVM: Supported 00:17:13.114 Firmware Activate/Download: Not Supported 00:17:13.114 Namespace Management: Supported 00:17:13.114 Device Self-Test: Not Supported 00:17:13.114 Directives: Supported 00:17:13.114 NVMe-MI: Not Supported 00:17:13.114 Virtualization Management: Not Supported 00:17:13.114 Doorbell Buffer Config: Supported 00:17:13.114 Get LBA Status Capability: Not Supported 00:17:13.114 Command & Feature Lockdown Capability: Not Supported 00:17:13.114 Abort Command Limit: 4 00:17:13.114 Async Event Request Limit: 4 00:17:13.114 Number of Firmware Slots: N/A 00:17:13.114 Firmware Slot 1 Read-Only: N/A 00:17:13.114 Firmware Activation Without Reset: N/A 00:17:13.114 Multiple Update Detection Support: N/A 00:17:13.114 Firmware Update Granularity: No Information Provided 00:17:13.114 Per-Namespace SMART Log: Yes 00:17:13.114 Asymmetric Namespace Access Log Page: Not Supported 00:17:13.114 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:17:13.114 Command Effects Log Page: Supported 00:17:13.114 Get Log Page Extended Data: Supported 00:17:13.114 Telemetry Log Pages: Not Supported 00:17:13.114 Persistent Event Log Pages: Not Supported 00:17:13.114 Supported Log Pages Log Page: May Support 00:17:13.114 Commands Supported & Effects Log Page: Not Supported 00:17:13.114 Feature
Identifiers & Effects Log Page:May Support 00:17:13.114 NVMe-MI Commands & Effects Log Page: May Support 00:17:13.114 Data Area 4 for Telemetry Log: Not Supported 00:17:13.114 Error Log Page Entries Supported: 1 00:17:13.114 Keep Alive: Not Supported 00:17:13.114 00:17:13.114 NVM Command Set Attributes 00:17:13.114 ========================== 00:17:13.114 Submission Queue Entry Size 00:17:13.114 Max: 64 00:17:13.114 Min: 64 00:17:13.114 Completion Queue Entry Size 00:17:13.114 Max: 16 00:17:13.114 Min: 16 00:17:13.114 Number of Namespaces: 256 00:17:13.114 Compare Command: Supported 00:17:13.114 Write Uncorrectable Command: Not Supported 00:17:13.114 Dataset Management Command: Supported 00:17:13.114 Write Zeroes Command: Supported 00:17:13.114 Set Features Save Field: Supported 00:17:13.114 Reservations: Not Supported 00:17:13.114 Timestamp: Supported 00:17:13.114 Copy: Supported 00:17:13.114 Volatile Write Cache: Present 00:17:13.114 Atomic Write Unit (Normal): 1 00:17:13.114 Atomic Write Unit (PFail): 1 00:17:13.114 Atomic Compare & Write Unit: 1 00:17:13.114 Fused Compare & Write: Not Supported 00:17:13.114 Scatter-Gather List 00:17:13.114 SGL Command Set: Supported 00:17:13.114 SGL Keyed: Not Supported 00:17:13.114 SGL Bit Bucket Descriptor: Not Supported 00:17:13.114 SGL Metadata Pointer: Not Supported 00:17:13.114 Oversized SGL: Not Supported 00:17:13.114 SGL Metadata Address: Not Supported 00:17:13.114 SGL Offset: Not Supported 00:17:13.114 Transport SGL Data Block: Not Supported 00:17:13.114 Replay Protected Memory Block: Not Supported 00:17:13.114 00:17:13.114 Firmware Slot Information 00:17:13.114 ========================= 00:17:13.114 Active slot: 1 00:17:13.114 Slot 1 Firmware Revision: 1.0 00:17:13.114 00:17:13.114 00:17:13.114 Commands Supported and Effects 00:17:13.114 ============================== 00:17:13.114 Admin Commands 00:17:13.114 -------------- 00:17:13.114 Delete I/O Submission Queue (00h): Supported 00:17:13.114 Create I/O Submission Queue (01h): Supported 00:17:13.114 Get Log Page (02h): Supported 00:17:13.114 Delete I/O Completion Queue (04h): Supported 00:17:13.114 Create I/O Completion Queue (05h): Supported 00:17:13.114 Identify (06h): Supported 00:17:13.114 Abort (08h): Supported 00:17:13.114 Set Features (09h): Supported 00:17:13.114 Get Features (0Ah): Supported 00:17:13.114 Asynchronous Event Request (0Ch): Supported 00:17:13.114 Namespace Attachment (15h): Supported NS-Inventory-Change 00:17:13.114 Directive Send (19h): Supported 00:17:13.114 Directive Receive (1Ah): Supported 00:17:13.114 Virtualization Management (1Ch): Supported 00:17:13.114 Doorbell Buffer Config (7Ch): Supported 00:17:13.114 Format NVM (80h): Supported LBA-Change 00:17:13.114 I/O Commands 00:17:13.114 ------------ 00:17:13.114 Flush (00h): Supported LBA-Change 00:17:13.114 Write (01h): Supported LBA-Change 00:17:13.114 Read (02h): Supported 00:17:13.114 Compare (05h): Supported 00:17:13.114 Write Zeroes (08h): Supported LBA-Change 00:17:13.114 Dataset Management (09h): Supported LBA-Change 00:17:13.114 Unknown (0Ch): Supported 00:17:13.114 Unknown (12h): Supported 00:17:13.114 Copy (19h): Supported LBA-Change 00:17:13.114 Unknown (1Dh): Supported LBA-Change 00:17:13.114 00:17:13.114 Error Log 00:17:13.114 ========= 00:17:13.114 00:17:13.114 Arbitration 00:17:13.114 =========== 00:17:13.114 Arbitration Burst: no limit 00:17:13.114 00:17:13.114 Power Management 00:17:13.114 ================ 00:17:13.114 Number of Power States: 1 00:17:13.114 Current Power State: Power State #0 
00:17:13.114 Power State #0: 00:17:13.114 Max Power: 25.00 W 00:17:13.114 Non-Operational State: Operational 00:17:13.114 Entry Latency: 16 microseconds 00:17:13.114 Exit Latency: 4 microseconds 00:17:13.114 Relative Read Throughput: 0 00:17:13.114 Relative Read Latency: 0 00:17:13.114 Relative Write Throughput: 0 00:17:13.114 Relative Write Latency: 0 00:17:13.114 Idle Power: Not Reported 00:17:13.114 Active Power: Not Reported 00:17:13.114 Non-Operational Permissive Mode: Not Supported 00:17:13.114 00:17:13.114 Health Information 00:17:13.114 ================== 00:17:13.114 Critical Warnings: 00:17:13.114 Available Spare Space: OK 00:17:13.114 Temperature: OK 00:17:13.114 Device Reliability: OK 00:17:13.114 Read Only: No 00:17:13.114 Volatile Memory Backup: OK 00:17:13.114 Current Temperature: 323 Kelvin (50 Celsius) 00:17:13.114 Temperature Threshold: 343 Kelvin (70 Celsius) 00:17:13.114 Available Spare: 0% 00:17:13.114 Available Spare Threshold: 0% 00:17:13.114 Life Percentage Used: 0% 00:17:13.114 Data Units Read: 837 00:17:13.114 Data Units Written: 730 00:17:13.114 Host Read Commands: 37229 00:17:13.114 Host Write Commands: 35819 00:17:13.114 Controller Busy Time: 0 minutes 00:17:13.114 Power Cycles: 0 00:17:13.114 Power On Hours: 0 hours 00:17:13.114 Unsafe Shutdowns: 0 00:17:13.114 Unrecoverable Media Errors: 0 00:17:13.114 Lifetime Error Log Entries: 0 00:17:13.114 Warning Temperature Time: 0 minutes 00:17:13.114 Critical Temperature Time: 0 minutes 00:17:13.114 00:17:13.114 Number of Queues 00:17:13.114 ================ 00:17:13.114 Number of I/O Submission Queues: 64 00:17:13.114 Number of I/O Completion Queues: 64 00:17:13.114 00:17:13.114 ZNS Specific Controller Data 00:17:13.114 ============================ 00:17:13.114 Zone Append Size Limit: 0 00:17:13.114 00:17:13.114 00:17:13.114 Active Namespaces 00:17:13.114 ================= 00:17:13.114 Namespace ID:1 00:17:13.114 Error Recovery Timeout: Unlimited 00:17:13.114 Command Set Identifier: NVM (00h) 00:17:13.114 Deallocate: Supported 00:17:13.114 Deallocated/Unwritten Error: Supported 00:17:13.114 Deallocated Read Value: All 0x00 00:17:13.114 Deallocate in Write Zeroes: Not Supported 00:17:13.114 Deallocated Guard Field: 0xFFFF 00:17:13.114 Flush: Supported 00:17:13.114 Reservation: Not Supported 00:17:13.114 Namespace Sharing Capabilities: Multiple Controllers 00:17:13.114 Size (in LBAs): 262144 (1GiB) 00:17:13.114 Capacity (in LBAs): 262144 (1GiB) 00:17:13.114 Utilization (in LBAs): 262144 (1GiB) 00:17:13.114 Thin Provisioning: Not Supported 00:17:13.114 Per-NS Atomic Units: No 00:17:13.114 Maximum Single Source Range Length: 128 00:17:13.114 Maximum Copy Length: 128 00:17:13.114 Maximum Source Range Count: 128 00:17:13.114 NGUID/EUI64 Never Reused: No 00:17:13.114 Namespace Write Protected: No 00:17:13.114 Endurance group ID: 1 00:17:13.114 Number of LBA Formats: 8 00:17:13.114 Current LBA Format: LBA Format #04 00:17:13.114 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:13.115 LBA Format #01: Data Size: 512 Metadata Size: 8 00:17:13.115 LBA Format #02: Data Size: 512 Metadata Size: 16 00:17:13.115 LBA Format #03: Data Size: 512 Metadata Size: 64 00:17:13.115 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:17:13.115 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:17:13.115 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:17:13.115 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:17:13.115 00:17:13.115 Get Feature FDP: 00:17:13.115 ================ 00:17:13.115 Enabled: Yes 00:17:13.115 FDP 
configuration index: 0 00:17:13.115 00:17:13.115 FDP configurations log page 00:17:13.115 =========================== 00:17:13.115 Number of FDP configurations: 1 00:17:13.115 Version: 0 00:17:13.115 Size: 112 00:17:13.115 FDP Configuration Descriptor: 0 00:17:13.115 Descriptor Size: 96 00:17:13.115 Reclaim Group Identifier format: 2 00:17:13.115 FDP Volatile Write Cache: Not Present 00:17:13.115 FDP Configuration: Valid 00:17:13.115 Vendor Specific Size: 0 00:17:13.115 Number of Reclaim Groups: 2 00:17:13.115 Number of Reclaim Unit Handles: 8 00:17:13.115 Max Placement Identifiers: 128 00:17:13.115 Number of Namespaces Supported: 256 00:17:13.115 Reclaim unit Nominal Size: 6000000 bytes 00:17:13.115 Estimated Reclaim Unit Time Limit: Not Reported 00:17:13.115 RUH Desc #000: RUH Type: Initially Isolated 00:17:13.115 RUH Desc #001: RUH Type: Initially Isolated 00:17:13.115 RUH Desc #002: RUH Type: Initially Isolated 00:17:13.115 RUH Desc #003: RUH Type: Initially Isolated 00:17:13.115 RUH Desc #004: RUH Type: Initially Isolated 00:17:13.115 RUH Desc #005: RUH Type: Initially Isolated 00:17:13.115 RUH Desc #006: RUH Type: Initially Isolated 00:17:13.115 RUH Desc #007: RUH Type: Initially Isolated 00:17:13.115 00:17:13.115 FDP reclaim unit handle usage log page 00:17:13.115 ====================================== 00:17:13.115 Number of Reclaim Unit Handles: 8 00:17:13.115 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:17:13.115 RUH Usage Desc #001: RUH Attributes: Unused 00:17:13.115 RUH Usage Desc #002: RUH Attributes: Unused 00:17:13.115 RUH Usage Desc #003: RUH Attributes: Unused 00:17:13.115 RUH Usage Desc #004: RUH Attributes: Unused 00:17:13.115 RUH Usage Desc #005: RUH Attributes: Unused 00:17:13.115 RUH Usage Desc #006: RUH Attributes: Unused 00:17:13.115 RUH Usage Desc #007: RUH Attributes: Unused 00:17:13.115 00:17:13.115 FDP statistics log page 00:17:13.115 ======================= 00:17:13.115 Host bytes with metadata written: 472424448 00:17:13.115 Media bytes with metadata written: 472477696 00:17:13.115 Media bytes erased: 0 00:17:13.115 00:17:13.115 FDP events log page 00:17:13.115 =================== 00:17:13.115 Number of FDP events: 0 00:17:13.115 00:17:13.115 NVM Specific Namespace Data 00:17:13.115 =========================== 00:17:13.115 Logical Block Storage Tag Mask: 0 00:17:13.115 Protection Information Capabilities: 00:17:13.115 16b Guard Protection Information Storage Tag Support: No 00:17:13.115 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:17:13.115 Storage Tag Check Read Support: No 00:17:13.115 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:13.115 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:13.115 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:13.115 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:13.115 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:13.115 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:13.115 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:13.115 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:17:13.115 00:17:13.115 real 0m1.332s 00:17:13.115 user 0m0.491s 00:17:13.115 sys 0m0.652s 00:17:13.115 09:31:13
nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:13.115 09:31:13 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:17:13.115 ************************************ 00:17:13.115 END TEST nvme_identify 00:17:13.115 ************************************ 00:17:13.375 09:31:13 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:17:13.375 09:31:13 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:13.375 09:31:13 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:13.375 09:31:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:13.375 ************************************ 00:17:13.375 START TEST nvme_perf 00:17:13.375 ************************************ 00:17:13.375 09:31:13 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:17:13.375 09:31:13 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:17:14.763 Initializing NVMe Controllers 00:17:14.763 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:14.763 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:17:14.763 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:17:14.763 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:17:14.763 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:14.763 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:17:14.763 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:17:14.763 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:17:14.763 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:17:14.763 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:17:14.763 Initialization complete. Launching workers. 00:17:14.763 ======================================================== 00:17:14.763 Latency(us) 00:17:14.763 Device Information : IOPS MiB/s Average min max 00:17:14.763 PCIE (0000:00:10.0) NSID 1 from core 0: 15743.61 184.50 8146.37 6870.38 45836.76 00:17:14.763 PCIE (0000:00:11.0) NSID 1 from core 0: 15743.61 184.50 8131.85 6952.27 43515.32 00:17:14.763 PCIE (0000:00:13.0) NSID 1 from core 0: 15743.61 184.50 8116.14 6945.61 41765.91 00:17:14.763 PCIE (0000:00:12.0) NSID 1 from core 0: 15743.61 184.50 8100.93 6948.41 39614.43 00:17:14.763 PCIE (0000:00:12.0) NSID 2 from core 0: 15743.61 184.50 8085.86 6923.53 37475.58 00:17:14.763 PCIE (0000:00:12.0) NSID 3 from core 0: 15807.60 185.25 8037.13 6952.77 30640.72 00:17:14.763 ======================================================== 00:17:14.763 Total : 94525.64 1107.72 8103.00 6870.38 45836.76 00:17:14.763 00:17:14.763 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:17:14.763 ================================================================================= 00:17:14.763 1.00000% : 7040.112us 00:17:14.763 10.00000% : 7297.677us 00:17:14.763 25.00000% : 7498.005us 00:17:14.763 50.00000% : 7784.189us 00:17:14.763 75.00000% : 8127.609us 00:17:14.763 90.00000% : 8356.555us 00:17:14.763 95.00000% : 8814.449us 00:17:14.763 98.00000% : 10932.206us 00:17:14.763 99.00000% : 14652.590us 00:17:14.763 99.50000% : 38234.103us 00:17:14.763 99.90000% : 45560.398us 00:17:14.763 99.99000% : 45789.345us 00:17:14.763 99.99900% : 46018.292us 00:17:14.763 99.99990% : 46018.292us 00:17:14.763 99.99999% : 46018.292us 00:17:14.763 00:17:14.763 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:17:14.763 ================================================================================= 00:17:14.763 1.00000% : 
7125.967us 00:17:14.763 10.00000% : 7383.532us 00:17:14.763 25.00000% : 7555.242us 00:17:14.763 50.00000% : 7784.189us 00:17:14.763 75.00000% : 8070.372us 00:17:14.763 90.00000% : 8242.082us 00:17:14.763 95.00000% : 8814.449us 00:17:14.763 98.00000% : 11218.390us 00:17:14.763 99.00000% : 14881.537us 00:17:14.763 99.50000% : 36173.583us 00:17:14.763 99.90000% : 43270.931us 00:17:14.763 99.99000% : 43499.878us 00:17:14.763 99.99900% : 43728.824us 00:17:14.763 99.99990% : 43728.824us 00:17:14.763 99.99999% : 43728.824us 00:17:14.763 00:17:14.763 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:17:14.763 ================================================================================= 00:17:14.763 1.00000% : 7125.967us 00:17:14.763 10.00000% : 7383.532us 00:17:14.763 25.00000% : 7555.242us 00:17:14.763 50.00000% : 7784.189us 00:17:14.763 75.00000% : 8070.372us 00:17:14.763 90.00000% : 8242.082us 00:17:14.763 95.00000% : 8699.976us 00:17:14.763 98.00000% : 12134.176us 00:17:14.763 99.00000% : 15453.904us 00:17:14.763 99.50000% : 34799.902us 00:17:14.763 99.90000% : 41439.357us 00:17:14.763 99.99000% : 41897.251us 00:17:14.763 99.99900% : 41897.251us 00:17:14.763 99.99990% : 41897.251us 00:17:14.763 99.99999% : 41897.251us 00:17:14.763 00:17:14.764 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:17:14.764 ================================================================================= 00:17:14.764 1.00000% : 7125.967us 00:17:14.764 10.00000% : 7383.532us 00:17:14.764 25.00000% : 7555.242us 00:17:14.764 50.00000% : 7784.189us 00:17:14.764 75.00000% : 8070.372us 00:17:14.764 90.00000% : 8299.319us 00:17:14.764 95.00000% : 8814.449us 00:17:14.764 98.00000% : 11619.046us 00:17:14.764 99.00000% : 15110.484us 00:17:14.764 99.50000% : 32739.382us 00:17:14.764 99.90000% : 39378.837us 00:17:14.764 99.99000% : 39607.783us 00:17:14.764 99.99900% : 39836.730us 00:17:14.764 99.99990% : 39836.730us 00:17:14.764 99.99999% : 39836.730us 00:17:14.764 00:17:14.764 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:17:14.764 ================================================================================= 00:17:14.764 1.00000% : 7125.967us 00:17:14.764 10.00000% : 7383.532us 00:17:14.764 25.00000% : 7555.242us 00:17:14.764 50.00000% : 7784.189us 00:17:14.764 75.00000% : 8070.372us 00:17:14.764 90.00000% : 8242.082us 00:17:14.764 95.00000% : 8814.449us 00:17:14.764 98.00000% : 11275.626us 00:17:14.764 99.00000% : 14538.117us 00:17:14.764 99.50000% : 30907.808us 00:17:14.764 99.90000% : 37089.369us 00:17:14.764 99.99000% : 37547.263us 00:17:14.764 99.99900% : 37547.263us 00:17:14.764 99.99990% : 37547.263us 00:17:14.764 99.99999% : 37547.263us 00:17:14.764 00:17:14.764 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:17:14.764 ================================================================================= 00:17:14.764 1.00000% : 7125.967us 00:17:14.764 10.00000% : 7383.532us 00:17:14.764 25.00000% : 7555.242us 00:17:14.764 50.00000% : 7784.189us 00:17:14.764 75.00000% : 8070.372us 00:17:14.764 90.00000% : 8299.319us 00:17:14.764 95.00000% : 9215.106us 00:17:14.764 98.00000% : 11275.626us 00:17:14.764 99.00000% : 14538.117us 00:17:14.764 99.50000% : 23009.146us 00:17:14.764 99.90000% : 30220.968us 00:17:14.764 99.99000% : 30678.861us 00:17:14.764 99.99900% : 30678.861us 00:17:14.764 99.99990% : 30678.861us 00:17:14.764 99.99999% : 30678.861us 00:17:14.764 00:17:14.764 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 
00:17:14.764 ============================================================================== 00:17:14.764 Range in us Cumulative IO count 00:17:14.764 6868.402 - 6897.020: 0.0826% ( 13) 00:17:14.764 6897.020 - 6925.638: 0.1715% ( 14) 00:17:14.764 6925.638 - 6954.257: 0.3493% ( 28) 00:17:14.764 6954.257 - 6982.875: 0.5526% ( 32) 00:17:14.764 6982.875 - 7011.493: 0.8575% ( 48) 00:17:14.764 7011.493 - 7040.112: 1.1560% ( 47) 00:17:14.764 7040.112 - 7068.730: 1.5689% ( 65) 00:17:14.764 7068.730 - 7097.348: 2.1151% ( 86) 00:17:14.764 7097.348 - 7125.967: 2.8709% ( 119) 00:17:14.764 7125.967 - 7154.585: 3.7221% ( 134) 00:17:14.764 7154.585 - 7183.203: 4.8336% ( 175) 00:17:14.764 7183.203 - 7211.822: 6.0912% ( 198) 00:17:14.764 7211.822 - 7240.440: 7.6092% ( 239) 00:17:14.764 7240.440 - 7269.059: 9.2416% ( 257) 00:17:14.764 7269.059 - 7297.677: 11.1852% ( 306) 00:17:14.764 7297.677 - 7326.295: 13.1606% ( 311) 00:17:14.764 7326.295 - 7383.532: 17.3717% ( 663) 00:17:14.764 7383.532 - 7440.769: 22.0084% ( 730) 00:17:14.764 7440.769 - 7498.005: 26.7276% ( 743) 00:17:14.764 7498.005 - 7555.242: 31.5358% ( 757) 00:17:14.764 7555.242 - 7612.479: 36.3694% ( 761) 00:17:14.764 7612.479 - 7669.715: 41.1395% ( 751) 00:17:14.764 7669.715 - 7726.952: 45.7889% ( 732) 00:17:14.764 7726.952 - 7784.189: 50.6034% ( 758) 00:17:14.764 7784.189 - 7841.425: 55.5196% ( 774) 00:17:14.764 7841.425 - 7898.662: 60.3341% ( 758) 00:17:14.764 7898.662 - 7955.899: 65.1105% ( 752) 00:17:14.764 7955.899 - 8013.135: 69.8298% ( 743) 00:17:14.764 8013.135 - 8070.372: 74.3775% ( 716) 00:17:14.764 8070.372 - 8127.609: 78.6712% ( 676) 00:17:14.764 8127.609 - 8184.845: 82.7236% ( 638) 00:17:14.764 8184.845 - 8242.082: 86.3694% ( 574) 00:17:14.764 8242.082 - 8299.319: 89.2975% ( 461) 00:17:14.764 8299.319 - 8356.555: 91.6540% ( 371) 00:17:14.764 8356.555 - 8413.792: 93.0958% ( 227) 00:17:14.764 8413.792 - 8471.029: 93.9787% ( 139) 00:17:14.764 8471.029 - 8528.266: 94.4296% ( 71) 00:17:14.764 8528.266 - 8585.502: 94.6710% ( 38) 00:17:14.764 8585.502 - 8642.739: 94.8044% ( 21) 00:17:14.764 8642.739 - 8699.976: 94.9123% ( 17) 00:17:14.764 8699.976 - 8757.212: 94.9886% ( 12) 00:17:14.764 8757.212 - 8814.449: 95.0521% ( 10) 00:17:14.764 8814.449 - 8871.686: 95.1283% ( 12) 00:17:14.764 8871.686 - 8928.922: 95.2236% ( 15) 00:17:14.764 8928.922 - 8986.159: 95.2934% ( 11) 00:17:14.764 8986.159 - 9043.396: 95.3951% ( 16) 00:17:14.764 9043.396 - 9100.632: 95.5094% ( 18) 00:17:14.764 9100.632 - 9157.869: 95.6110% ( 16) 00:17:14.764 9157.869 - 9215.106: 95.7063% ( 15) 00:17:14.764 9215.106 - 9272.342: 95.8079% ( 16) 00:17:14.764 9272.342 - 9329.579: 95.8905% ( 13) 00:17:14.764 9329.579 - 9386.816: 95.9540% ( 10) 00:17:14.764 9386.816 - 9444.052: 96.0175% ( 10) 00:17:14.764 9444.052 - 9501.289: 96.1001% ( 13) 00:17:14.764 9501.289 - 9558.526: 96.1573% ( 9) 00:17:14.764 9558.526 - 9615.762: 96.2208% ( 10) 00:17:14.764 9615.762 - 9672.999: 96.2843% ( 10) 00:17:14.764 9672.999 - 9730.236: 96.3478% ( 10) 00:17:14.764 9730.236 - 9787.472: 96.4240% ( 12) 00:17:14.764 9787.472 - 9844.709: 96.5257% ( 16) 00:17:14.764 9844.709 - 9901.946: 96.6146% ( 14) 00:17:14.764 9901.946 - 9959.183: 96.7099% ( 15) 00:17:14.764 9959.183 - 10016.419: 96.8051% ( 15) 00:17:14.764 10016.419 - 10073.656: 96.8877% ( 13) 00:17:14.764 10073.656 - 10130.893: 96.9957% ( 17) 00:17:14.764 10130.893 - 10188.129: 97.1037% ( 17) 00:17:14.764 10188.129 - 10245.366: 97.1989% ( 15) 00:17:14.764 10245.366 - 10302.603: 97.2752% ( 12) 00:17:14.764 10302.603 - 10359.839: 97.3323% ( 9) 
00:17:14.764 10359.839 - 10417.076: 97.4022% ( 11) 00:17:14.764 10417.076 - 10474.313: 97.4911% ( 14) 00:17:14.764 10474.313 - 10531.549: 97.5546% ( 10) 00:17:14.764 10531.549 - 10588.786: 97.6245% ( 11) 00:17:14.764 10588.786 - 10646.023: 97.6944% ( 11) 00:17:14.764 10646.023 - 10703.259: 97.7642% ( 11) 00:17:14.764 10703.259 - 10760.496: 97.8532% ( 14) 00:17:14.764 10760.496 - 10817.733: 97.9167% ( 10) 00:17:14.764 10817.733 - 10874.969: 97.9929% ( 12) 00:17:14.764 10874.969 - 10932.206: 98.0564% ( 10) 00:17:14.764 10932.206 - 10989.443: 98.1199% ( 10) 00:17:14.764 10989.443 - 11046.679: 98.1580% ( 6) 00:17:14.764 11046.679 - 11103.916: 98.2088% ( 8) 00:17:14.764 11103.916 - 11161.153: 98.2470% ( 6) 00:17:14.764 11161.153 - 11218.390: 98.2660% ( 3) 00:17:14.764 11218.390 - 11275.626: 98.2851% ( 3) 00:17:14.764 11275.626 - 11332.863: 98.2978% ( 2) 00:17:14.764 11332.863 - 11390.100: 98.3168% ( 3) 00:17:14.764 11390.100 - 11447.336: 98.3295% ( 2) 00:17:14.764 11447.336 - 11504.573: 98.3486% ( 3) 00:17:14.764 11504.573 - 11561.810: 98.3676% ( 3) 00:17:14.764 11561.810 - 11619.046: 98.3740% ( 1) 00:17:14.764 13107.200 - 13164.437: 98.3803% ( 1) 00:17:14.764 13164.437 - 13221.673: 98.3994% ( 3) 00:17:14.764 13221.673 - 13278.910: 98.4184% ( 3) 00:17:14.764 13278.910 - 13336.147: 98.4375% ( 3) 00:17:14.764 13336.147 - 13393.383: 98.4566% ( 3) 00:17:14.764 13393.383 - 13450.620: 98.4756% ( 3) 00:17:14.764 13450.620 - 13507.857: 98.4947% ( 3) 00:17:14.764 13507.857 - 13565.093: 98.5137% ( 3) 00:17:14.764 13565.093 - 13622.330: 98.5328% ( 3) 00:17:14.764 13622.330 - 13679.567: 98.5518% ( 3) 00:17:14.764 13679.567 - 13736.803: 98.5709% ( 3) 00:17:14.764 13736.803 - 13794.040: 98.5899% ( 3) 00:17:14.764 13794.040 - 13851.277: 98.6026% ( 2) 00:17:14.764 13851.277 - 13908.514: 98.6217% ( 3) 00:17:14.764 13908.514 - 13965.750: 98.6535% ( 5) 00:17:14.764 13965.750 - 14022.987: 98.6916% ( 6) 00:17:14.764 14022.987 - 14080.224: 98.7297% ( 6) 00:17:14.764 14080.224 - 14137.460: 98.7614% ( 5) 00:17:14.764 14137.460 - 14194.697: 98.8059% ( 7) 00:17:14.764 14194.697 - 14251.934: 98.8377% ( 5) 00:17:14.764 14251.934 - 14309.170: 98.8694% ( 5) 00:17:14.764 14309.170 - 14366.407: 98.9012% ( 5) 00:17:14.764 14366.407 - 14423.644: 98.9202% ( 3) 00:17:14.764 14423.644 - 14480.880: 98.9456% ( 4) 00:17:14.764 14480.880 - 14538.117: 98.9647% ( 3) 00:17:14.764 14538.117 - 14595.354: 98.9837% ( 3) 00:17:14.764 14595.354 - 14652.590: 99.0028% ( 3) 00:17:14.764 14652.590 - 14767.064: 99.0346% ( 5) 00:17:14.764 14767.064 - 14881.537: 99.0790% ( 7) 00:17:14.764 14881.537 - 14996.010: 99.1044% ( 4) 00:17:14.764 14996.010 - 15110.484: 99.1489% ( 7) 00:17:14.764 15110.484 - 15224.957: 99.1870% ( 6) 00:17:14.764 36402.529 - 36631.476: 99.2124% ( 4) 00:17:14.764 36631.476 - 36860.423: 99.2569% ( 7) 00:17:14.764 36860.423 - 37089.369: 99.3013% ( 7) 00:17:14.764 37089.369 - 37318.316: 99.3458% ( 7) 00:17:14.764 37318.316 - 37547.263: 99.3839% ( 6) 00:17:14.764 37547.263 - 37776.210: 99.4347% ( 8) 00:17:14.764 37776.210 - 38005.156: 99.4855% ( 8) 00:17:14.764 38005.156 - 38234.103: 99.5300% ( 7) 00:17:14.765 38234.103 - 38463.050: 99.5744% ( 7) 00:17:14.765 38463.050 - 38691.997: 99.5935% ( 3) 00:17:14.765 43728.824 - 43957.771: 99.6189% ( 4) 00:17:14.765 43957.771 - 44186.718: 99.6697% ( 8) 00:17:14.765 44186.718 - 44415.665: 99.7078% ( 6) 00:17:14.765 44415.665 - 44644.611: 99.7523% ( 7) 00:17:14.765 44644.611 - 44873.558: 99.8031% ( 8) 00:17:14.765 44873.558 - 45102.505: 99.8476% ( 7) 00:17:14.765 45102.505 - 45331.452: 
99.8984% ( 8) 00:17:14.765 45331.452 - 45560.398: 99.9428% ( 7) 00:17:14.765 45560.398 - 45789.345: 99.9936% ( 8) 00:17:14.765 45789.345 - 46018.292: 100.0000% ( 1) 00:17:14.765 00:17:14.765 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:17:14.765 ============================================================================== 00:17:14.765 Range in us Cumulative IO count 00:17:14.765 6925.638 - 6954.257: 0.0064% ( 1) 00:17:14.765 6954.257 - 6982.875: 0.0572% ( 8) 00:17:14.765 6982.875 - 7011.493: 0.1651% ( 17) 00:17:14.765 7011.493 - 7040.112: 0.3493% ( 29) 00:17:14.765 7040.112 - 7068.730: 0.5462% ( 31) 00:17:14.765 7068.730 - 7097.348: 0.8257% ( 44) 00:17:14.765 7097.348 - 7125.967: 1.1433% ( 50) 00:17:14.765 7125.967 - 7154.585: 1.5752% ( 68) 00:17:14.765 7154.585 - 7183.203: 2.1913% ( 97) 00:17:14.765 7183.203 - 7211.822: 2.9726% ( 123) 00:17:14.765 7211.822 - 7240.440: 3.9825% ( 159) 00:17:14.765 7240.440 - 7269.059: 5.2274% ( 196) 00:17:14.765 7269.059 - 7297.677: 6.5866% ( 214) 00:17:14.765 7297.677 - 7326.295: 8.3270% ( 274) 00:17:14.765 7326.295 - 7383.532: 12.1507% ( 602) 00:17:14.765 7383.532 - 7440.769: 17.0541% ( 772) 00:17:14.765 7440.769 - 7498.005: 22.2561% ( 819) 00:17:14.765 7498.005 - 7555.242: 28.0107% ( 906) 00:17:14.765 7555.242 - 7612.479: 33.7335% ( 901) 00:17:14.765 7612.479 - 7669.715: 39.6278% ( 928) 00:17:14.765 7669.715 - 7726.952: 45.6237% ( 944) 00:17:14.765 7726.952 - 7784.189: 51.5816% ( 938) 00:17:14.765 7784.189 - 7841.425: 57.5140% ( 934) 00:17:14.765 7841.425 - 7898.662: 63.3067% ( 912) 00:17:14.765 7898.662 - 7955.899: 68.7818% ( 862) 00:17:14.765 7955.899 - 8013.135: 74.0790% ( 834) 00:17:14.765 8013.135 - 8070.372: 78.9761% ( 771) 00:17:14.765 8070.372 - 8127.609: 83.5239% ( 716) 00:17:14.765 8127.609 - 8184.845: 87.1951% ( 578) 00:17:14.765 8184.845 - 8242.082: 90.1169% ( 460) 00:17:14.765 8242.082 - 8299.319: 92.1430% ( 319) 00:17:14.765 8299.319 - 8356.555: 93.3689% ( 193) 00:17:14.765 8356.555 - 8413.792: 94.1057% ( 116) 00:17:14.765 8413.792 - 8471.029: 94.4360% ( 52) 00:17:14.765 8471.029 - 8528.266: 94.6519% ( 34) 00:17:14.765 8528.266 - 8585.502: 94.7472% ( 15) 00:17:14.765 8585.502 - 8642.739: 94.8234% ( 12) 00:17:14.765 8642.739 - 8699.976: 94.9060% ( 13) 00:17:14.765 8699.976 - 8757.212: 94.9822% ( 12) 00:17:14.765 8757.212 - 8814.449: 95.0521% ( 11) 00:17:14.765 8814.449 - 8871.686: 95.1347% ( 13) 00:17:14.765 8871.686 - 8928.922: 95.2172% ( 13) 00:17:14.765 8928.922 - 8986.159: 95.2998% ( 13) 00:17:14.765 8986.159 - 9043.396: 95.3887% ( 14) 00:17:14.765 9043.396 - 9100.632: 95.4713% ( 13) 00:17:14.765 9100.632 - 9157.869: 95.5856% ( 18) 00:17:14.765 9157.869 - 9215.106: 95.6809% ( 15) 00:17:14.765 9215.106 - 9272.342: 95.7698% ( 14) 00:17:14.765 9272.342 - 9329.579: 95.8460% ( 12) 00:17:14.765 9329.579 - 9386.816: 95.9350% ( 14) 00:17:14.765 9386.816 - 9444.052: 96.0239% ( 14) 00:17:14.765 9444.052 - 9501.289: 96.1192% ( 15) 00:17:14.765 9501.289 - 9558.526: 96.1954% ( 12) 00:17:14.765 9558.526 - 9615.762: 96.2970% ( 16) 00:17:14.765 9615.762 - 9672.999: 96.3859% ( 14) 00:17:14.765 9672.999 - 9730.236: 96.4812% ( 15) 00:17:14.765 9730.236 - 9787.472: 96.5828% ( 16) 00:17:14.765 9787.472 - 9844.709: 96.6717% ( 14) 00:17:14.765 9844.709 - 9901.946: 96.7607% ( 14) 00:17:14.765 9901.946 - 9959.183: 96.8559% ( 15) 00:17:14.765 9959.183 - 10016.419: 96.9322% ( 12) 00:17:14.765 10016.419 - 10073.656: 96.9703% ( 6) 00:17:14.765 10073.656 - 10130.893: 97.0147% ( 7) 00:17:14.765 10130.893 - 10188.129: 97.0655% ( 8) 00:17:14.765 
10188.129 - 10245.366: 97.1354% ( 11) 00:17:14.765 10245.366 - 10302.603: 97.1799% ( 7) 00:17:14.765 10302.603 - 10359.839: 97.2434% ( 10) 00:17:14.765 10359.839 - 10417.076: 97.3260% ( 13) 00:17:14.765 10417.076 - 10474.313: 97.3831% ( 9) 00:17:14.765 10474.313 - 10531.549: 97.4212% ( 6) 00:17:14.765 10531.549 - 10588.786: 97.4593% ( 6) 00:17:14.765 10588.786 - 10646.023: 97.4848% ( 4) 00:17:14.765 10646.023 - 10703.259: 97.5229% ( 6) 00:17:14.765 10703.259 - 10760.496: 97.5546% ( 5) 00:17:14.765 10760.496 - 10817.733: 97.5927% ( 6) 00:17:14.765 10817.733 - 10874.969: 97.6308% ( 6) 00:17:14.765 10874.969 - 10932.206: 97.7007% ( 11) 00:17:14.765 10932.206 - 10989.443: 97.7642% ( 10) 00:17:14.765 10989.443 - 11046.679: 97.8277% ( 10) 00:17:14.765 11046.679 - 11103.916: 97.8849% ( 9) 00:17:14.765 11103.916 - 11161.153: 97.9548% ( 11) 00:17:14.765 11161.153 - 11218.390: 98.0246% ( 11) 00:17:14.765 11218.390 - 11275.626: 98.0882% ( 10) 00:17:14.765 11275.626 - 11332.863: 98.1517% ( 10) 00:17:14.765 11332.863 - 11390.100: 98.2152% ( 10) 00:17:14.765 11390.100 - 11447.336: 98.2470% ( 5) 00:17:14.765 11447.336 - 11504.573: 98.2660% ( 3) 00:17:14.765 11504.573 - 11561.810: 98.2914% ( 4) 00:17:14.765 11561.810 - 11619.046: 98.3168% ( 4) 00:17:14.765 11619.046 - 11676.283: 98.3422% ( 4) 00:17:14.765 11676.283 - 11733.520: 98.3676% ( 4) 00:17:14.765 11733.520 - 11790.756: 98.3740% ( 1) 00:17:14.765 12878.253 - 12935.490: 98.3930% ( 3) 00:17:14.765 12935.490 - 12992.727: 98.4121% ( 3) 00:17:14.765 12992.727 - 13049.963: 98.4375% ( 4) 00:17:14.765 13049.963 - 13107.200: 98.4566% ( 3) 00:17:14.765 13107.200 - 13164.437: 98.4820% ( 4) 00:17:14.765 13164.437 - 13221.673: 98.5010% ( 3) 00:17:14.765 13221.673 - 13278.910: 98.5264% ( 4) 00:17:14.765 13278.910 - 13336.147: 98.5455% ( 3) 00:17:14.765 13336.147 - 13393.383: 98.5709% ( 4) 00:17:14.765 13393.383 - 13450.620: 98.5899% ( 3) 00:17:14.765 13450.620 - 13507.857: 98.6090% ( 3) 00:17:14.765 13507.857 - 13565.093: 98.6344% ( 4) 00:17:14.765 13565.093 - 13622.330: 98.6535% ( 3) 00:17:14.765 13622.330 - 13679.567: 98.6725% ( 3) 00:17:14.765 13679.567 - 13736.803: 98.6979% ( 4) 00:17:14.765 13736.803 - 13794.040: 98.7170% ( 3) 00:17:14.765 13794.040 - 13851.277: 98.7424% ( 4) 00:17:14.765 13851.277 - 13908.514: 98.7614% ( 3) 00:17:14.765 13908.514 - 13965.750: 98.7805% ( 3) 00:17:14.765 14251.934 - 14309.170: 98.7868% ( 1) 00:17:14.765 14309.170 - 14366.407: 98.8122% ( 4) 00:17:14.765 14366.407 - 14423.644: 98.8313% ( 3) 00:17:14.765 14423.644 - 14480.880: 98.8567% ( 4) 00:17:14.765 14480.880 - 14538.117: 98.8758% ( 3) 00:17:14.765 14538.117 - 14595.354: 98.9012% ( 4) 00:17:14.765 14595.354 - 14652.590: 98.9202% ( 3) 00:17:14.765 14652.590 - 14767.064: 98.9647% ( 7) 00:17:14.765 14767.064 - 14881.537: 99.0091% ( 7) 00:17:14.765 14881.537 - 14996.010: 99.0473% ( 6) 00:17:14.765 14996.010 - 15110.484: 99.0917% ( 7) 00:17:14.765 15110.484 - 15224.957: 99.1298% ( 6) 00:17:14.765 15224.957 - 15339.431: 99.1743% ( 7) 00:17:14.765 15339.431 - 15453.904: 99.1870% ( 2) 00:17:14.765 34570.955 - 34799.902: 99.2315% ( 7) 00:17:14.765 34799.902 - 35028.849: 99.2823% ( 8) 00:17:14.765 35028.849 - 35257.796: 99.3331% ( 8) 00:17:14.765 35257.796 - 35486.742: 99.3775% ( 7) 00:17:14.765 35486.742 - 35715.689: 99.4284% ( 8) 00:17:14.765 35715.689 - 35944.636: 99.4728% ( 7) 00:17:14.765 35944.636 - 36173.583: 99.5236% ( 8) 00:17:14.765 36173.583 - 36402.529: 99.5744% ( 8) 00:17:14.765 36402.529 - 36631.476: 99.5935% ( 3) 00:17:14.765 41439.357 - 41668.304: 99.5998% ( 1) 
00:17:14.765 41668.304 - 41897.251: 99.6507% ( 8) 00:17:14.765 41897.251 - 42126.197: 99.6951% ( 7) 00:17:14.765 42126.197 - 42355.144: 99.7459% ( 8) 00:17:14.765 42355.144 - 42584.091: 99.7904% ( 7) 00:17:14.765 42584.091 - 42813.038: 99.8412% ( 8) 00:17:14.765 42813.038 - 43041.984: 99.8984% ( 9) 00:17:14.765 43041.984 - 43270.931: 99.9428% ( 7) 00:17:14.765 43270.931 - 43499.878: 99.9936% ( 8) 00:17:14.765 43499.878 - 43728.824: 100.0000% ( 1) 00:17:14.765 00:17:14.765 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:17:14.765 ============================================================================== 00:17:14.765 Range in us Cumulative IO count 00:17:14.765 6925.638 - 6954.257: 0.0127% ( 2) 00:17:14.765 6954.257 - 6982.875: 0.0635% ( 8) 00:17:14.765 6982.875 - 7011.493: 0.1651% ( 16) 00:17:14.765 7011.493 - 7040.112: 0.2922% ( 20) 00:17:14.765 7040.112 - 7068.730: 0.5526% ( 41) 00:17:14.765 7068.730 - 7097.348: 0.8702% ( 50) 00:17:14.765 7097.348 - 7125.967: 1.2449% ( 59) 00:17:14.765 7125.967 - 7154.585: 1.7721% ( 83) 00:17:14.765 7154.585 - 7183.203: 2.4009% ( 99) 00:17:14.765 7183.203 - 7211.822: 3.1885% ( 124) 00:17:14.765 7211.822 - 7240.440: 4.1857% ( 157) 00:17:14.765 7240.440 - 7269.059: 5.3417% ( 182) 00:17:14.765 7269.059 - 7297.677: 6.7645% ( 224) 00:17:14.765 7297.677 - 7326.295: 8.4477% ( 265) 00:17:14.765 7326.295 - 7383.532: 12.2523% ( 599) 00:17:14.765 7383.532 - 7440.769: 16.8890% ( 730) 00:17:14.766 7440.769 - 7498.005: 22.1608% ( 830) 00:17:14.766 7498.005 - 7555.242: 27.8773% ( 900) 00:17:14.766 7555.242 - 7612.479: 33.8097% ( 934) 00:17:14.766 7612.479 - 7669.715: 39.7040% ( 928) 00:17:14.766 7669.715 - 7726.952: 45.7063% ( 945) 00:17:14.766 7726.952 - 7784.189: 51.6451% ( 935) 00:17:14.766 7784.189 - 7841.425: 57.5902% ( 936) 00:17:14.766 7841.425 - 7898.662: 63.3130% ( 901) 00:17:14.766 7898.662 - 7955.899: 68.9787% ( 892) 00:17:14.766 7955.899 - 8013.135: 74.1425% ( 813) 00:17:14.766 8013.135 - 8070.372: 78.9634% ( 759) 00:17:14.766 8070.372 - 8127.609: 83.3524% ( 691) 00:17:14.766 8127.609 - 8184.845: 87.1761% ( 602) 00:17:14.766 8184.845 - 8242.082: 90.1550% ( 469) 00:17:14.766 8242.082 - 8299.319: 92.2637% ( 332) 00:17:14.766 8299.319 - 8356.555: 93.5658% ( 205) 00:17:14.766 8356.555 - 8413.792: 94.3471% ( 123) 00:17:14.766 8413.792 - 8471.029: 94.6710% ( 51) 00:17:14.766 8471.029 - 8528.266: 94.8107% ( 22) 00:17:14.766 8528.266 - 8585.502: 94.9251% ( 18) 00:17:14.766 8585.502 - 8642.739: 94.9949% ( 11) 00:17:14.766 8642.739 - 8699.976: 95.0838% ( 14) 00:17:14.766 8699.976 - 8757.212: 95.1664% ( 13) 00:17:14.766 8757.212 - 8814.449: 95.2490% ( 13) 00:17:14.766 8814.449 - 8871.686: 95.3316% ( 13) 00:17:14.766 8871.686 - 8928.922: 95.4141% ( 13) 00:17:14.766 8928.922 - 8986.159: 95.5094% ( 15) 00:17:14.766 8986.159 - 9043.396: 95.5920% ( 13) 00:17:14.766 9043.396 - 9100.632: 95.6809% ( 14) 00:17:14.766 9100.632 - 9157.869: 95.7698% ( 14) 00:17:14.766 9157.869 - 9215.106: 95.8460% ( 12) 00:17:14.766 9215.106 - 9272.342: 95.9350% ( 14) 00:17:14.766 9272.342 - 9329.579: 96.0175% ( 13) 00:17:14.766 9329.579 - 9386.816: 96.1065% ( 14) 00:17:14.766 9386.816 - 9444.052: 96.2208% ( 18) 00:17:14.766 9444.052 - 9501.289: 96.3288% ( 17) 00:17:14.766 9501.289 - 9558.526: 96.4240% ( 15) 00:17:14.766 9558.526 - 9615.762: 96.5130% ( 14) 00:17:14.766 9615.762 - 9672.999: 96.6082% ( 15) 00:17:14.766 9672.999 - 9730.236: 96.6908% ( 13) 00:17:14.766 9730.236 - 9787.472: 96.7543% ( 10) 00:17:14.766 9787.472 - 9844.709: 96.8432% ( 14) 00:17:14.766 9844.709 - 
9901.946: 96.9385% ( 15) 00:17:14.766 9901.946 - 9959.183: 97.0401% ( 16) 00:17:14.766 9959.183 - 10016.419: 97.1481% ( 17) 00:17:14.766 10016.419 - 10073.656: 97.2434% ( 15) 00:17:14.766 10073.656 - 10130.893: 97.3069% ( 10) 00:17:14.766 10130.893 - 10188.129: 97.3641% ( 9) 00:17:14.766 10188.129 - 10245.366: 97.4022% ( 6) 00:17:14.766 10245.366 - 10302.603: 97.4339% ( 5) 00:17:14.766 10302.603 - 10359.839: 97.4721% ( 6) 00:17:14.766 10359.839 - 10417.076: 97.5102% ( 6) 00:17:14.766 10417.076 - 10474.313: 97.5483% ( 6) 00:17:14.766 10474.313 - 10531.549: 97.5927% ( 7) 00:17:14.766 10531.549 - 10588.786: 97.6308% ( 6) 00:17:14.766 10588.786 - 10646.023: 97.6690% ( 6) 00:17:14.766 10646.023 - 10703.259: 97.7071% ( 6) 00:17:14.766 10703.259 - 10760.496: 97.7452% ( 6) 00:17:14.766 10760.496 - 10817.733: 97.7769% ( 5) 00:17:14.766 10817.733 - 10874.969: 97.8150% ( 6) 00:17:14.766 10874.969 - 10932.206: 97.8532% ( 6) 00:17:14.766 10932.206 - 10989.443: 97.8976% ( 7) 00:17:14.766 10989.443 - 11046.679: 97.9230% ( 4) 00:17:14.766 11046.679 - 11103.916: 97.9357% ( 2) 00:17:14.766 11103.916 - 11161.153: 97.9548% ( 3) 00:17:14.766 11161.153 - 11218.390: 97.9675% ( 2) 00:17:14.766 11962.466 - 12019.703: 97.9738% ( 1) 00:17:14.766 12019.703 - 12076.940: 97.9929% ( 3) 00:17:14.766 12076.940 - 12134.176: 98.0119% ( 3) 00:17:14.766 12134.176 - 12191.413: 98.0628% ( 8) 00:17:14.766 12191.413 - 12248.650: 98.1199% ( 9) 00:17:14.766 12248.650 - 12305.886: 98.1771% ( 9) 00:17:14.766 12305.886 - 12363.123: 98.2279% ( 8) 00:17:14.766 12363.123 - 12420.360: 98.2724% ( 7) 00:17:14.766 12420.360 - 12477.597: 98.3168% ( 7) 00:17:14.766 12477.597 - 12534.833: 98.3613% ( 7) 00:17:14.766 12534.833 - 12592.070: 98.4184% ( 9) 00:17:14.766 12592.070 - 12649.307: 98.4629% ( 7) 00:17:14.766 12649.307 - 12706.543: 98.5137% ( 8) 00:17:14.766 12706.543 - 12763.780: 98.5582% ( 7) 00:17:14.766 12763.780 - 12821.017: 98.6090% ( 8) 00:17:14.766 12821.017 - 12878.253: 98.6535% ( 7) 00:17:14.766 12878.253 - 12935.490: 98.7043% ( 8) 00:17:14.766 12935.490 - 12992.727: 98.7487% ( 7) 00:17:14.766 12992.727 - 13049.963: 98.7741% ( 4) 00:17:14.766 13049.963 - 13107.200: 98.7805% ( 1) 00:17:14.766 14767.064 - 14881.537: 98.8122% ( 5) 00:17:14.766 14881.537 - 14996.010: 98.8567% ( 7) 00:17:14.766 14996.010 - 15110.484: 98.9012% ( 7) 00:17:14.766 15110.484 - 15224.957: 98.9456% ( 7) 00:17:14.766 15224.957 - 15339.431: 98.9901% ( 7) 00:17:14.766 15339.431 - 15453.904: 99.0346% ( 7) 00:17:14.766 15453.904 - 15568.377: 99.0790% ( 7) 00:17:14.766 15568.377 - 15682.851: 99.1171% ( 6) 00:17:14.766 15682.851 - 15797.324: 99.1679% ( 8) 00:17:14.766 15797.324 - 15911.797: 99.1870% ( 3) 00:17:14.766 33197.275 - 33426.222: 99.2315% ( 7) 00:17:14.766 33426.222 - 33655.169: 99.2823% ( 8) 00:17:14.766 33655.169 - 33884.115: 99.3267% ( 7) 00:17:14.766 33884.115 - 34113.062: 99.3775% ( 8) 00:17:14.766 34113.062 - 34342.009: 99.4284% ( 8) 00:17:14.766 34342.009 - 34570.955: 99.4792% ( 8) 00:17:14.766 34570.955 - 34799.902: 99.5300% ( 8) 00:17:14.766 34799.902 - 35028.849: 99.5808% ( 8) 00:17:14.766 35028.849 - 35257.796: 99.5935% ( 2) 00:17:14.766 39836.730 - 40065.677: 99.6253% ( 5) 00:17:14.766 40065.677 - 40294.624: 99.6761% ( 8) 00:17:14.766 40294.624 - 40523.570: 99.7205% ( 7) 00:17:14.766 40523.570 - 40752.517: 99.7777% ( 9) 00:17:14.766 40752.517 - 40981.464: 99.8285% ( 8) 00:17:14.766 40981.464 - 41210.410: 99.8793% ( 8) 00:17:14.766 41210.410 - 41439.357: 99.9238% ( 7) 00:17:14.766 41439.357 - 41668.304: 99.9746% ( 8) 00:17:14.766 41668.304 - 
41897.251: 100.0000% ( 4) 00:17:14.766 00:17:14.766 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:17:14.766 ============================================================================== 00:17:14.766 Range in us Cumulative IO count 00:17:14.766 6925.638 - 6954.257: 0.0064% ( 1) 00:17:14.766 6954.257 - 6982.875: 0.0445% ( 6) 00:17:14.766 6982.875 - 7011.493: 0.1270% ( 13) 00:17:14.766 7011.493 - 7040.112: 0.2668% ( 22) 00:17:14.766 7040.112 - 7068.730: 0.4637% ( 31) 00:17:14.766 7068.730 - 7097.348: 0.7685% ( 48) 00:17:14.766 7097.348 - 7125.967: 1.1751% ( 64) 00:17:14.766 7125.967 - 7154.585: 1.7975% ( 98) 00:17:14.766 7154.585 - 7183.203: 2.4073% ( 96) 00:17:14.766 7183.203 - 7211.822: 3.1695% ( 120) 00:17:14.766 7211.822 - 7240.440: 4.1286% ( 151) 00:17:14.766 7240.440 - 7269.059: 5.2782% ( 181) 00:17:14.766 7269.059 - 7297.677: 6.6311% ( 213) 00:17:14.766 7297.677 - 7326.295: 8.3079% ( 264) 00:17:14.766 7326.295 - 7383.532: 12.0998% ( 597) 00:17:14.766 7383.532 - 7440.769: 16.9398% ( 762) 00:17:14.766 7440.769 - 7498.005: 22.3641% ( 854) 00:17:14.766 7498.005 - 7555.242: 27.9281% ( 876) 00:17:14.766 7555.242 - 7612.479: 33.8034% ( 925) 00:17:14.766 7612.479 - 7669.715: 39.7675% ( 939) 00:17:14.766 7669.715 - 7726.952: 45.6491% ( 926) 00:17:14.766 7726.952 - 7784.189: 51.5752% ( 933) 00:17:14.766 7784.189 - 7841.425: 57.4187% ( 920) 00:17:14.766 7841.425 - 7898.662: 63.0843% ( 892) 00:17:14.766 7898.662 - 7955.899: 68.7691% ( 895) 00:17:14.766 7955.899 - 8013.135: 74.0473% ( 831) 00:17:14.766 8013.135 - 8070.372: 78.8935% ( 763) 00:17:14.766 8070.372 - 8127.609: 83.2254% ( 682) 00:17:14.766 8127.609 - 8184.845: 86.9411% ( 585) 00:17:14.766 8184.845 - 8242.082: 89.9200% ( 469) 00:17:14.766 8242.082 - 8299.319: 92.0986% ( 343) 00:17:14.766 8299.319 - 8356.555: 93.2800% ( 186) 00:17:14.766 8356.555 - 8413.792: 93.9596% ( 107) 00:17:14.766 8413.792 - 8471.029: 94.2899% ( 52) 00:17:14.766 8471.029 - 8528.266: 94.4423% ( 24) 00:17:14.766 8528.266 - 8585.502: 94.5694% ( 20) 00:17:14.766 8585.502 - 8642.739: 94.6837% ( 18) 00:17:14.766 8642.739 - 8699.976: 94.7917% ( 17) 00:17:14.766 8699.976 - 8757.212: 94.9060% ( 18) 00:17:14.766 8757.212 - 8814.449: 95.0203% ( 18) 00:17:14.766 8814.449 - 8871.686: 95.1220% ( 16) 00:17:14.766 8871.686 - 8928.922: 95.2426% ( 19) 00:17:14.766 8928.922 - 8986.159: 95.3633% ( 19) 00:17:14.766 8986.159 - 9043.396: 95.4967% ( 21) 00:17:14.766 9043.396 - 9100.632: 95.6301% ( 21) 00:17:14.766 9100.632 - 9157.869: 95.7635% ( 21) 00:17:14.766 9157.869 - 9215.106: 95.8778% ( 18) 00:17:14.766 9215.106 - 9272.342: 95.9794% ( 16) 00:17:14.766 9272.342 - 9329.579: 96.1001% ( 19) 00:17:14.766 9329.579 - 9386.816: 96.2081% ( 17) 00:17:14.766 9386.816 - 9444.052: 96.3161% ( 17) 00:17:14.766 9444.052 - 9501.289: 96.4367% ( 19) 00:17:14.766 9501.289 - 9558.526: 96.5511% ( 18) 00:17:14.766 9558.526 - 9615.762: 96.6527% ( 16) 00:17:14.766 9615.762 - 9672.999: 96.7607% ( 17) 00:17:14.766 9672.999 - 9730.236: 96.8814% ( 19) 00:17:14.766 9730.236 - 9787.472: 96.9830% ( 16) 00:17:14.766 9787.472 - 9844.709: 97.0655% ( 13) 00:17:14.766 9844.709 - 9901.946: 97.1418% ( 12) 00:17:14.766 9901.946 - 9959.183: 97.1989% ( 9) 00:17:14.766 9959.183 - 10016.419: 97.2561% ( 9) 00:17:14.766 10016.419 - 10073.656: 97.3069% ( 8) 00:17:14.766 10073.656 - 10130.893: 97.3387% ( 5) 00:17:14.766 10130.893 - 10188.129: 97.3577% ( 3) 00:17:14.766 10188.129 - 10245.366: 97.4212% ( 10) 00:17:14.766 10245.366 - 10302.603: 97.4403% ( 3) 00:17:14.767 10302.603 - 10359.839: 97.4784% ( 6) 
00:17:14.767 10359.839 - 10417.076: 97.5165% ( 6) 00:17:14.767 10417.076 - 10474.313: 97.5546% ( 6) 00:17:14.767 10474.313 - 10531.549: 97.5927% ( 6) 00:17:14.767 10531.549 - 10588.786: 97.6308% ( 6) 00:17:14.767 10588.786 - 10646.023: 97.6753% ( 7) 00:17:14.767 10646.023 - 10703.259: 97.7007% ( 4) 00:17:14.767 10703.259 - 10760.496: 97.7388% ( 6) 00:17:14.767 10760.496 - 10817.733: 97.7706% ( 5) 00:17:14.767 10817.733 - 10874.969: 97.7833% ( 2) 00:17:14.767 10874.969 - 10932.206: 97.8023% ( 3) 00:17:14.767 10932.206 - 10989.443: 97.8214% ( 3) 00:17:14.767 10989.443 - 11046.679: 97.8404% ( 3) 00:17:14.767 11046.679 - 11103.916: 97.8595% ( 3) 00:17:14.767 11103.916 - 11161.153: 97.8722% ( 2) 00:17:14.767 11161.153 - 11218.390: 97.8976% ( 4) 00:17:14.767 11218.390 - 11275.626: 97.9103% ( 2) 00:17:14.767 11275.626 - 11332.863: 97.9294% ( 3) 00:17:14.767 11332.863 - 11390.100: 97.9484% ( 3) 00:17:14.767 11390.100 - 11447.336: 97.9675% ( 3) 00:17:14.767 11504.573 - 11561.810: 97.9865% ( 3) 00:17:14.767 11561.810 - 11619.046: 98.0056% ( 3) 00:17:14.767 11619.046 - 11676.283: 98.0310% ( 4) 00:17:14.767 11676.283 - 11733.520: 98.0501% ( 3) 00:17:14.767 11733.520 - 11790.756: 98.0755% ( 4) 00:17:14.767 11790.756 - 11847.993: 98.1009% ( 4) 00:17:14.767 11847.993 - 11905.230: 98.1263% ( 4) 00:17:14.767 11905.230 - 11962.466: 98.1453% ( 3) 00:17:14.767 11962.466 - 12019.703: 98.1644% ( 3) 00:17:14.767 12019.703 - 12076.940: 98.1834% ( 3) 00:17:14.767 12076.940 - 12134.176: 98.2025% ( 3) 00:17:14.767 12134.176 - 12191.413: 98.2279% ( 4) 00:17:14.767 12191.413 - 12248.650: 98.2533% ( 4) 00:17:14.767 12248.650 - 12305.886: 98.2724% ( 3) 00:17:14.767 12305.886 - 12363.123: 98.2978% ( 4) 00:17:14.767 12363.123 - 12420.360: 98.3168% ( 3) 00:17:14.767 12420.360 - 12477.597: 98.3422% ( 4) 00:17:14.767 12477.597 - 12534.833: 98.3613% ( 3) 00:17:14.767 12534.833 - 12592.070: 98.3740% ( 2) 00:17:14.767 12706.543 - 12763.780: 98.3994% ( 4) 00:17:14.767 12763.780 - 12821.017: 98.4248% ( 4) 00:17:14.767 12821.017 - 12878.253: 98.4439% ( 3) 00:17:14.767 12878.253 - 12935.490: 98.4756% ( 5) 00:17:14.767 12935.490 - 12992.727: 98.5010% ( 4) 00:17:14.767 12992.727 - 13049.963: 98.5264% ( 4) 00:17:14.767 13049.963 - 13107.200: 98.5582% ( 5) 00:17:14.767 13107.200 - 13164.437: 98.5836% ( 4) 00:17:14.767 13164.437 - 13221.673: 98.6026% ( 3) 00:17:14.767 13221.673 - 13278.910: 98.6280% ( 4) 00:17:14.767 13278.910 - 13336.147: 98.6598% ( 5) 00:17:14.767 13336.147 - 13393.383: 98.6852% ( 4) 00:17:14.767 13393.383 - 13450.620: 98.7106% ( 4) 00:17:14.767 13450.620 - 13507.857: 98.7297% ( 3) 00:17:14.767 13507.857 - 13565.093: 98.7551% ( 4) 00:17:14.767 13565.093 - 13622.330: 98.7741% ( 3) 00:17:14.767 13622.330 - 13679.567: 98.7805% ( 1) 00:17:14.767 14480.880 - 14538.117: 98.7995% ( 3) 00:17:14.767 14538.117 - 14595.354: 98.8186% ( 3) 00:17:14.767 14595.354 - 14652.590: 98.8504% ( 5) 00:17:14.767 14652.590 - 14767.064: 98.8885% ( 6) 00:17:14.767 14767.064 - 14881.537: 98.9329% ( 7) 00:17:14.767 14881.537 - 14996.010: 98.9774% ( 7) 00:17:14.767 14996.010 - 15110.484: 99.0282% ( 8) 00:17:14.767 15110.484 - 15224.957: 99.0663% ( 6) 00:17:14.767 15224.957 - 15339.431: 99.1171% ( 8) 00:17:14.767 15339.431 - 15453.904: 99.1616% ( 7) 00:17:14.767 15453.904 - 15568.377: 99.1870% ( 4) 00:17:14.767 31365.701 - 31594.648: 99.2442% ( 9) 00:17:14.767 31594.648 - 31823.595: 99.2950% ( 8) 00:17:14.767 31823.595 - 32052.541: 99.3458% ( 8) 00:17:14.767 32052.541 - 32281.488: 99.3966% ( 8) 00:17:14.767 32281.488 - 32510.435: 99.4474% ( 8) 
00:17:14.767 32510.435 - 32739.382: 99.5046% ( 9) 00:17:14.767 32739.382 - 32968.328: 99.5490% ( 7) 00:17:14.767 32968.328 - 33197.275: 99.5935% ( 7) 00:17:14.767 37547.263 - 37776.210: 99.5998% ( 1) 00:17:14.767 37776.210 - 38005.156: 99.6507% ( 8) 00:17:14.767 38005.156 - 38234.103: 99.7078% ( 9) 00:17:14.767 38234.103 - 38463.050: 99.7586% ( 8) 00:17:14.767 38463.050 - 38691.997: 99.8031% ( 7) 00:17:14.767 38691.997 - 38920.943: 99.8539% ( 8) 00:17:14.767 38920.943 - 39149.890: 99.8984% ( 7) 00:17:14.767 39149.890 - 39378.837: 99.9492% ( 8) 00:17:14.767 39378.837 - 39607.783: 99.9936% ( 7) 00:17:14.767 39607.783 - 39836.730: 100.0000% ( 1) 00:17:14.767 00:17:14.767 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:17:14.767 ============================================================================== 00:17:14.767 Range in us Cumulative IO count 00:17:14.767 6897.020 - 6925.638: 0.0064% ( 1) 00:17:14.767 6925.638 - 6954.257: 0.0254% ( 3) 00:17:14.767 6954.257 - 6982.875: 0.0572% ( 5) 00:17:14.767 6982.875 - 7011.493: 0.1524% ( 15) 00:17:14.767 7011.493 - 7040.112: 0.2477% ( 15) 00:17:14.767 7040.112 - 7068.730: 0.4637% ( 34) 00:17:14.767 7068.730 - 7097.348: 0.7940% ( 52) 00:17:14.767 7097.348 - 7125.967: 1.2703% ( 75) 00:17:14.767 7125.967 - 7154.585: 1.8674% ( 94) 00:17:14.767 7154.585 - 7183.203: 2.4454% ( 91) 00:17:14.767 7183.203 - 7211.822: 3.2457% ( 126) 00:17:14.767 7211.822 - 7240.440: 4.2365% ( 156) 00:17:14.767 7240.440 - 7269.059: 5.3290% ( 172) 00:17:14.767 7269.059 - 7297.677: 6.5993% ( 200) 00:17:14.767 7297.677 - 7326.295: 8.1999% ( 252) 00:17:14.767 7326.295 - 7383.532: 12.2269% ( 634) 00:17:14.767 7383.532 - 7440.769: 16.9017% ( 736) 00:17:14.767 7440.769 - 7498.005: 22.3006% ( 850) 00:17:14.767 7498.005 - 7555.242: 27.8582% ( 875) 00:17:14.767 7555.242 - 7612.479: 33.7589% ( 929) 00:17:14.767 7612.479 - 7669.715: 39.7421% ( 942) 00:17:14.767 7669.715 - 7726.952: 45.5920% ( 921) 00:17:14.767 7726.952 - 7784.189: 51.4545% ( 923) 00:17:14.767 7784.189 - 7841.425: 57.3044% ( 921) 00:17:14.767 7841.425 - 7898.662: 63.0907% ( 911) 00:17:14.767 7898.662 - 7955.899: 68.6801% ( 880) 00:17:14.767 7955.899 - 8013.135: 73.9774% ( 834) 00:17:14.767 8013.135 - 8070.372: 78.7792% ( 756) 00:17:14.767 8070.372 - 8127.609: 83.2381% ( 702) 00:17:14.767 8127.609 - 8184.845: 87.0490% ( 600) 00:17:14.767 8184.845 - 8242.082: 90.0851% ( 478) 00:17:14.767 8242.082 - 8299.319: 92.2002% ( 333) 00:17:14.767 8299.319 - 8356.555: 93.4578% ( 198) 00:17:14.767 8356.555 - 8413.792: 94.0549% ( 94) 00:17:14.767 8413.792 - 8471.029: 94.3979% ( 54) 00:17:14.767 8471.029 - 8528.266: 94.5567% ( 25) 00:17:14.767 8528.266 - 8585.502: 94.6773% ( 19) 00:17:14.767 8585.502 - 8642.739: 94.7663% ( 14) 00:17:14.767 8642.739 - 8699.976: 94.8679% ( 16) 00:17:14.767 8699.976 - 8757.212: 94.9632% ( 15) 00:17:14.767 8757.212 - 8814.449: 95.0648% ( 16) 00:17:14.767 8814.449 - 8871.686: 95.1664% ( 16) 00:17:14.767 8871.686 - 8928.922: 95.2490% ( 13) 00:17:14.767 8928.922 - 8986.159: 95.3316% ( 13) 00:17:14.767 8986.159 - 9043.396: 95.4205% ( 14) 00:17:14.767 9043.396 - 9100.632: 95.5094% ( 14) 00:17:14.767 9100.632 - 9157.869: 95.5983% ( 14) 00:17:14.767 9157.869 - 9215.106: 95.6936% ( 15) 00:17:14.767 9215.106 - 9272.342: 95.7762% ( 13) 00:17:14.767 9272.342 - 9329.579: 95.8524% ( 12) 00:17:14.767 9329.579 - 9386.816: 95.9413% ( 14) 00:17:14.767 9386.816 - 9444.052: 96.0302% ( 14) 00:17:14.767 9444.052 - 9501.289: 96.1382% ( 17) 00:17:14.767 9501.289 - 9558.526: 96.2208% ( 13) 00:17:14.767 9558.526 - 
9615.762: 96.3288% ( 17) 00:17:14.767 9615.762 - 9672.999: 96.4494% ( 19) 00:17:14.767 9672.999 - 9730.236: 96.5511% ( 16) 00:17:14.767 9730.236 - 9787.472: 96.6400% ( 14) 00:17:14.767 9787.472 - 9844.709: 96.7289% ( 14) 00:17:14.767 9844.709 - 9901.946: 96.8305% ( 16) 00:17:14.767 9901.946 - 9959.183: 96.9004% ( 11) 00:17:14.767 9959.183 - 10016.419: 96.9893% ( 14) 00:17:14.767 10016.419 - 10073.656: 97.0783% ( 14) 00:17:14.767 10073.656 - 10130.893: 97.1418% ( 10) 00:17:14.767 10130.893 - 10188.129: 97.2053% ( 10) 00:17:14.767 10188.129 - 10245.366: 97.2688% ( 10) 00:17:14.767 10245.366 - 10302.603: 97.3260% ( 9) 00:17:14.767 10302.603 - 10359.839: 97.3831% ( 9) 00:17:14.767 10359.839 - 10417.076: 97.4593% ( 12) 00:17:14.767 10417.076 - 10474.313: 97.5102% ( 8) 00:17:14.767 10474.313 - 10531.549: 97.5546% ( 7) 00:17:14.767 10531.549 - 10588.786: 97.5927% ( 6) 00:17:14.767 10588.786 - 10646.023: 97.6245% ( 5) 00:17:14.767 10646.023 - 10703.259: 97.6690% ( 7) 00:17:14.767 10703.259 - 10760.496: 97.7071% ( 6) 00:17:14.767 10760.496 - 10817.733: 97.7388% ( 5) 00:17:14.767 10817.733 - 10874.969: 97.7706% ( 5) 00:17:14.767 10874.969 - 10932.206: 97.8214% ( 8) 00:17:14.767 10932.206 - 10989.443: 97.8659% ( 7) 00:17:14.767 10989.443 - 11046.679: 97.8976% ( 5) 00:17:14.767 11046.679 - 11103.916: 97.9357% ( 6) 00:17:14.767 11103.916 - 11161.153: 97.9738% ( 6) 00:17:14.767 11161.153 - 11218.390: 97.9992% ( 4) 00:17:14.767 11218.390 - 11275.626: 98.0373% ( 6) 00:17:14.767 11275.626 - 11332.863: 98.0691% ( 5) 00:17:14.767 11332.863 - 11390.100: 98.0945% ( 4) 00:17:14.767 11390.100 - 11447.336: 98.1326% ( 6) 00:17:14.767 11447.336 - 11504.573: 98.1707% ( 6) 00:17:14.767 11504.573 - 11561.810: 98.2088% ( 6) 00:17:14.767 11561.810 - 11619.046: 98.2470% ( 6) 00:17:14.767 11619.046 - 11676.283: 98.2787% ( 5) 00:17:14.767 11676.283 - 11733.520: 98.3105% ( 5) 00:17:14.767 11733.520 - 11790.756: 98.3422% ( 5) 00:17:14.767 11790.756 - 11847.993: 98.3613% ( 3) 00:17:14.767 11847.993 - 11905.230: 98.3740% ( 2) 00:17:14.767 13450.620 - 13507.857: 98.3867% ( 2) 00:17:14.768 13507.857 - 13565.093: 98.4121% ( 4) 00:17:14.768 13565.093 - 13622.330: 98.4311% ( 3) 00:17:14.768 13622.330 - 13679.567: 98.4566% ( 4) 00:17:14.768 13679.567 - 13736.803: 98.4820% ( 4) 00:17:14.768 13736.803 - 13794.040: 98.5010% ( 3) 00:17:14.768 13794.040 - 13851.277: 98.5264% ( 4) 00:17:14.768 13851.277 - 13908.514: 98.5455% ( 3) 00:17:14.768 13908.514 - 13965.750: 98.5772% ( 5) 00:17:14.768 13965.750 - 14022.987: 98.6280% ( 8) 00:17:14.768 14022.987 - 14080.224: 98.6725% ( 7) 00:17:14.768 14080.224 - 14137.460: 98.7170% ( 7) 00:17:14.768 14137.460 - 14194.697: 98.7614% ( 7) 00:17:14.768 14194.697 - 14251.934: 98.7932% ( 5) 00:17:14.768 14251.934 - 14309.170: 98.8377% ( 7) 00:17:14.768 14309.170 - 14366.407: 98.8758% ( 6) 00:17:14.768 14366.407 - 14423.644: 98.9202% ( 7) 00:17:14.768 14423.644 - 14480.880: 98.9520% ( 5) 00:17:14.768 14480.880 - 14538.117: 99.0028% ( 8) 00:17:14.768 14538.117 - 14595.354: 99.0282% ( 4) 00:17:14.768 14595.354 - 14652.590: 99.0473% ( 3) 00:17:14.768 14652.590 - 14767.064: 99.0917% ( 7) 00:17:14.768 14767.064 - 14881.537: 99.1362% ( 7) 00:17:14.768 14881.537 - 14996.010: 99.1743% ( 6) 00:17:14.768 14996.010 - 15110.484: 99.1870% ( 2) 00:17:14.768 29190.707 - 29305.181: 99.1997% ( 2) 00:17:14.768 29305.181 - 29534.128: 99.2442% ( 7) 00:17:14.768 29534.128 - 29763.074: 99.3013% ( 9) 00:17:14.768 29763.074 - 29992.021: 99.3458% ( 7) 00:17:14.768 29992.021 - 30220.968: 99.3966% ( 8) 00:17:14.768 30220.968 - 
30449.914: 99.4474% ( 8) 00:17:14.768 30449.914 - 30678.861: 99.4982% ( 8) 00:17:14.768 30678.861 - 30907.808: 99.5427% ( 7) 00:17:14.768 30907.808 - 31136.755: 99.5808% ( 6) 00:17:14.768 31136.755 - 31365.701: 99.5935% ( 2) 00:17:14.768 35486.742 - 35715.689: 99.6126% ( 3) 00:17:14.768 35715.689 - 35944.636: 99.6634% ( 8) 00:17:14.768 35944.636 - 36173.583: 99.7078% ( 7) 00:17:14.768 36173.583 - 36402.529: 99.7650% ( 9) 00:17:14.768 36402.529 - 36631.476: 99.8031% ( 6) 00:17:14.768 36631.476 - 36860.423: 99.8603% ( 9) 00:17:14.768 36860.423 - 37089.369: 99.9111% ( 8) 00:17:14.768 37089.369 - 37318.316: 99.9619% ( 8) 00:17:14.768 37318.316 - 37547.263: 100.0000% ( 6) 00:17:14.768 00:17:14.768 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:17:14.768 ============================================================================== 00:17:14.768 Range in us Cumulative IO count 00:17:14.768 6925.638 - 6954.257: 0.0063% ( 1) 00:17:14.768 6954.257 - 6982.875: 0.0696% ( 10) 00:17:14.768 6982.875 - 7011.493: 0.1771% ( 17) 00:17:14.768 7011.493 - 7040.112: 0.3353% ( 25) 00:17:14.768 7040.112 - 7068.730: 0.5314% ( 31) 00:17:14.768 7068.730 - 7097.348: 0.8477% ( 50) 00:17:14.768 7097.348 - 7125.967: 1.2589% ( 65) 00:17:14.768 7125.967 - 7154.585: 1.7333% ( 75) 00:17:14.768 7154.585 - 7183.203: 2.3785% ( 102) 00:17:14.768 7183.203 - 7211.822: 3.2389% ( 136) 00:17:14.768 7211.822 - 7240.440: 4.1688% ( 147) 00:17:14.768 7240.440 - 7269.059: 5.2442% ( 170) 00:17:14.768 7269.059 - 7297.677: 6.6296% ( 219) 00:17:14.768 7297.677 - 7326.295: 8.1857% ( 246) 00:17:14.768 7326.295 - 7383.532: 12.1078% ( 620) 00:17:14.768 7383.532 - 7440.769: 16.9977% ( 773) 00:17:14.768 7440.769 - 7498.005: 22.2799% ( 835) 00:17:14.768 7498.005 - 7555.242: 27.7454% ( 864) 00:17:14.768 7555.242 - 7612.479: 33.6475% ( 933) 00:17:14.768 7612.479 - 7669.715: 39.4610% ( 919) 00:17:14.768 7669.715 - 7726.952: 45.4327% ( 944) 00:17:14.768 7726.952 - 7784.189: 51.3854% ( 941) 00:17:14.768 7784.189 - 7841.425: 57.2811% ( 932) 00:17:14.768 7841.425 - 7898.662: 62.9555% ( 897) 00:17:14.768 7898.662 - 7955.899: 68.4653% ( 871) 00:17:14.768 7955.899 - 8013.135: 73.5956% ( 811) 00:17:14.768 8013.135 - 8070.372: 78.5741% ( 787) 00:17:14.768 8070.372 - 8127.609: 82.9580% ( 693) 00:17:14.768 8127.609 - 8184.845: 86.8168% ( 610) 00:17:14.768 8184.845 - 8242.082: 89.7773% ( 468) 00:17:14.768 8242.082 - 8299.319: 91.8269% ( 324) 00:17:14.768 8299.319 - 8356.555: 93.0352% ( 191) 00:17:14.768 8356.555 - 8413.792: 93.7437% ( 112) 00:17:14.768 8413.792 - 8471.029: 94.0663% ( 51) 00:17:14.768 8471.029 - 8528.266: 94.2181% ( 24) 00:17:14.768 8528.266 - 8585.502: 94.3320% ( 18) 00:17:14.768 8585.502 - 8642.739: 94.4269% ( 15) 00:17:14.768 8642.739 - 8699.976: 94.4965% ( 11) 00:17:14.768 8699.976 - 8757.212: 94.5660% ( 11) 00:17:14.768 8757.212 - 8814.449: 94.6356% ( 11) 00:17:14.768 8814.449 - 8871.686: 94.6989% ( 10) 00:17:14.768 8871.686 - 8928.922: 94.7621% ( 10) 00:17:14.768 8928.922 - 8986.159: 94.8191% ( 9) 00:17:14.768 8986.159 - 9043.396: 94.8570% ( 6) 00:17:14.768 9043.396 - 9100.632: 94.9140% ( 9) 00:17:14.768 9100.632 - 9157.869: 94.9836% ( 11) 00:17:14.768 9157.869 - 9215.106: 95.0784% ( 15) 00:17:14.768 9215.106 - 9272.342: 95.1860% ( 17) 00:17:14.768 9272.342 - 9329.579: 95.3062% ( 19) 00:17:14.768 9329.579 - 9386.816: 95.4137% ( 17) 00:17:14.768 9386.816 - 9444.052: 95.5213% ( 17) 00:17:14.768 9444.052 - 9501.289: 95.6414% ( 19) 00:17:14.768 9501.289 - 9558.526: 95.7553% ( 18) 00:17:14.768 9558.526 - 9615.762: 95.8502% ( 15) 
00:17:14.768 9615.762 - 9672.999: 95.9641% ( 18) 00:17:14.768 9672.999 - 9730.236: 96.0779% ( 18) 00:17:14.768 9730.236 - 9787.472: 96.2045% ( 20) 00:17:14.768 9787.472 - 9844.709: 96.3120% ( 17) 00:17:14.768 9844.709 - 9901.946: 96.4195% ( 17) 00:17:14.768 9901.946 - 9959.183: 96.5271% ( 17) 00:17:14.768 9959.183 - 10016.419: 96.6473% ( 19) 00:17:14.768 10016.419 - 10073.656: 96.7548% ( 17) 00:17:14.768 10073.656 - 10130.893: 96.8623% ( 17) 00:17:14.768 10130.893 - 10188.129: 96.9572% ( 15) 00:17:14.768 10188.129 - 10245.366: 97.0395% ( 13) 00:17:14.768 10245.366 - 10302.603: 97.1344% ( 15) 00:17:14.768 10302.603 - 10359.839: 97.1976% ( 10) 00:17:14.768 10359.839 - 10417.076: 97.2609% ( 10) 00:17:14.768 10417.076 - 10474.313: 97.3178% ( 9) 00:17:14.768 10474.313 - 10531.549: 97.3811% ( 10) 00:17:14.768 10531.549 - 10588.786: 97.4380% ( 9) 00:17:14.768 10588.786 - 10646.023: 97.5013% ( 10) 00:17:14.768 10646.023 - 10703.259: 97.5772% ( 12) 00:17:14.768 10703.259 - 10760.496: 97.6657% ( 14) 00:17:14.768 10760.496 - 10817.733: 97.7227% ( 9) 00:17:14.768 10817.733 - 10874.969: 97.7670% ( 7) 00:17:14.768 10874.969 - 10932.206: 97.7986% ( 5) 00:17:14.768 10932.206 - 10989.443: 97.8302% ( 5) 00:17:14.768 10989.443 - 11046.679: 97.8682% ( 6) 00:17:14.768 11046.679 - 11103.916: 97.9124% ( 7) 00:17:14.768 11103.916 - 11161.153: 97.9441% ( 5) 00:17:14.768 11161.153 - 11218.390: 97.9884% ( 7) 00:17:14.768 11218.390 - 11275.626: 98.0137% ( 4) 00:17:14.768 11275.626 - 11332.863: 98.0579% ( 7) 00:17:14.768 11332.863 - 11390.100: 98.0959% ( 6) 00:17:14.768 11390.100 - 11447.336: 98.1339% ( 6) 00:17:14.768 11447.336 - 11504.573: 98.1718% ( 6) 00:17:14.768 11504.573 - 11561.810: 98.2098% ( 6) 00:17:14.768 11561.810 - 11619.046: 98.2351% ( 4) 00:17:14.768 11619.046 - 11676.283: 98.2730% ( 6) 00:17:14.768 11676.283 - 11733.520: 98.2920% ( 3) 00:17:14.768 11733.520 - 11790.756: 98.3110% ( 3) 00:17:14.768 11790.756 - 11847.993: 98.3236% ( 2) 00:17:14.768 11847.993 - 11905.230: 98.3426% ( 3) 00:17:14.768 11905.230 - 11962.466: 98.3616% ( 3) 00:17:14.769 11962.466 - 12019.703: 98.3806% ( 3) 00:17:14.769 13622.330 - 13679.567: 98.3932% ( 2) 00:17:14.769 13679.567 - 13736.803: 98.4375% ( 7) 00:17:14.769 13736.803 - 13794.040: 98.4818% ( 7) 00:17:14.769 13794.040 - 13851.277: 98.5261% ( 7) 00:17:14.769 13851.277 - 13908.514: 98.5640% ( 6) 00:17:14.769 13908.514 - 13965.750: 98.6083% ( 7) 00:17:14.769 13965.750 - 14022.987: 98.6526% ( 7) 00:17:14.769 14022.987 - 14080.224: 98.6969% ( 7) 00:17:14.769 14080.224 - 14137.460: 98.7411% ( 7) 00:17:14.769 14137.460 - 14194.697: 98.7854% ( 7) 00:17:14.769 14194.697 - 14251.934: 98.8234% ( 6) 00:17:14.769 14251.934 - 14309.170: 98.8677% ( 7) 00:17:14.769 14309.170 - 14366.407: 98.9056% ( 6) 00:17:14.769 14366.407 - 14423.644: 98.9436% ( 6) 00:17:14.769 14423.644 - 14480.880: 98.9815% ( 6) 00:17:14.769 14480.880 - 14538.117: 99.0195% ( 6) 00:17:14.769 14538.117 - 14595.354: 99.0574% ( 6) 00:17:14.769 14595.354 - 14652.590: 99.1080% ( 8) 00:17:14.769 14652.590 - 14767.064: 99.1776% ( 11) 00:17:14.769 14767.064 - 14881.537: 99.1903% ( 2) 00:17:14.769 21520.992 - 21635.466: 99.1966% ( 1) 00:17:14.769 21635.466 - 21749.939: 99.2219% ( 4) 00:17:14.769 21749.939 - 21864.412: 99.2472% ( 4) 00:17:14.769 21864.412 - 21978.886: 99.2725% ( 4) 00:17:14.769 21978.886 - 22093.359: 99.2915% ( 3) 00:17:14.769 22093.359 - 22207.832: 99.3231% ( 5) 00:17:14.769 22207.832 - 22322.306: 99.3421% ( 3) 00:17:14.769 22322.306 - 22436.779: 99.3674% ( 4) 00:17:14.769 22436.779 - 22551.252: 99.3927% ( 
4) 00:17:14.769 22551.252 - 22665.726: 99.4180% ( 4) 00:17:14.769 22665.726 - 22780.199: 99.4496% ( 5) 00:17:14.769 22780.199 - 22894.672: 99.4749% ( 4) 00:17:14.769 22894.672 - 23009.146: 99.5003% ( 4) 00:17:14.769 23009.146 - 23123.619: 99.5319% ( 5) 00:17:14.769 23123.619 - 23238.093: 99.5572% ( 4) 00:17:14.769 23238.093 - 23352.566: 99.5825% ( 4) 00:17:14.769 23352.566 - 23467.039: 99.5951% ( 2) 00:17:14.769 28732.814 - 28847.287: 99.6141% ( 3) 00:17:14.769 28847.287 - 28961.761: 99.6394% ( 4) 00:17:14.769 28961.761 - 29076.234: 99.6647% ( 4) 00:17:14.769 29076.234 - 29190.707: 99.6900% ( 4) 00:17:14.769 29190.707 - 29305.181: 99.7153% ( 4) 00:17:14.769 29305.181 - 29534.128: 99.7659% ( 8) 00:17:14.769 29534.128 - 29763.074: 99.8102% ( 7) 00:17:14.769 29763.074 - 29992.021: 99.8545% ( 7) 00:17:14.769 29992.021 - 30220.968: 99.9051% ( 8) 00:17:14.769 30220.968 - 30449.914: 99.9557% ( 8) 00:17:14.769 30449.914 - 30678.861: 100.0000% ( 7) 00:17:14.769 00:17:14.769 09:31:15 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:17:15.708 Initializing NVMe Controllers 00:17:15.708 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:15.708 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:17:15.708 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:17:15.708 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:17:15.708 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:15.708 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:17:15.708 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:17:15.708 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:17:15.708 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:17:15.708 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:17:15.708 Initialization complete. Launching workers. 
00:17:15.708 ======================================================== 00:17:15.708 Latency(us) 00:17:15.708 Device Information : IOPS MiB/s Average min max 00:17:15.708 PCIE (0000:00:10.0) NSID 1 from core 0: 8523.20 99.88 15074.67 9598.97 45037.56 00:17:15.708 PCIE (0000:00:11.0) NSID 1 from core 0: 8523.20 99.88 15057.17 9794.99 43899.85 00:17:15.708 PCIE (0000:00:13.0) NSID 1 from core 0: 8523.20 99.88 15037.38 9736.70 43104.61 00:17:15.708 PCIE (0000:00:12.0) NSID 1 from core 0: 8523.20 99.88 15018.86 9848.97 42198.46 00:17:15.708 PCIE (0000:00:12.0) NSID 2 from core 0: 8523.20 99.88 14999.77 9637.80 41292.16 00:17:15.708 PCIE (0000:00:12.0) NSID 3 from core 0: 8586.80 100.63 14868.78 9861.32 30113.79 00:17:15.708 ======================================================== 00:17:15.708 Total : 51202.80 600.03 15009.26 9598.97 45037.56 00:17:15.708 00:17:15.708 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:17:15.708 ================================================================================= 00:17:15.708 1.00000% : 10016.419us 00:17:15.708 10.00000% : 11847.993us 00:17:15.708 25.00000% : 13278.910us 00:17:15.708 50.00000% : 14996.010us 00:17:15.708 75.00000% : 16369.691us 00:17:15.708 90.00000% : 17514.424us 00:17:15.708 95.00000% : 18086.791us 00:17:15.708 98.00000% : 19460.472us 00:17:15.708 99.00000% : 33655.169us 00:17:15.708 99.50000% : 43041.984us 00:17:15.708 99.90000% : 44644.611us 00:17:15.708 99.99000% : 45102.505us 00:17:15.708 99.99900% : 45102.505us 00:17:15.708 99.99990% : 45102.505us 00:17:15.708 99.99999% : 45102.505us 00:17:15.708 00:17:15.708 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:17:15.708 ================================================================================= 00:17:15.708 1.00000% : 10188.129us 00:17:15.708 10.00000% : 11790.756us 00:17:15.708 25.00000% : 13278.910us 00:17:15.708 50.00000% : 15110.484us 00:17:15.708 75.00000% : 16369.691us 00:17:15.708 90.00000% : 17399.951us 00:17:15.708 95.00000% : 18086.791us 00:17:15.708 98.00000% : 19231.525us 00:17:15.708 99.00000% : 32281.488us 00:17:15.708 99.50000% : 42126.197us 00:17:15.708 99.90000% : 43728.824us 00:17:15.708 99.99000% : 43957.771us 00:17:15.708 99.99900% : 43957.771us 00:17:15.708 99.99990% : 43957.771us 00:17:15.708 99.99999% : 43957.771us 00:17:15.708 00:17:15.708 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:17:15.708 ================================================================================= 00:17:15.708 1.00000% : 10188.129us 00:17:15.708 10.00000% : 11905.230us 00:17:15.708 25.00000% : 13278.910us 00:17:15.708 50.00000% : 14996.010us 00:17:15.708 75.00000% : 16369.691us 00:17:15.708 90.00000% : 17399.951us 00:17:15.709 95.00000% : 18086.791us 00:17:15.709 98.00000% : 18773.631us 00:17:15.709 99.00000% : 31594.648us 00:17:15.709 99.50000% : 41439.357us 00:17:15.709 99.90000% : 42813.038us 00:17:15.709 99.99000% : 43270.931us 00:17:15.709 99.99900% : 43270.931us 00:17:15.709 99.99990% : 43270.931us 00:17:15.709 99.99999% : 43270.931us 00:17:15.709 00:17:15.709 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:17:15.709 ================================================================================= 00:17:15.709 1.00000% : 10188.129us 00:17:15.709 10.00000% : 11790.756us 00:17:15.709 25.00000% : 13107.200us 00:17:15.709 50.00000% : 14881.537us 00:17:15.709 75.00000% : 16484.164us 00:17:15.709 90.00000% : 17514.424us 00:17:15.709 95.00000% : 18430.211us 00:17:15.709 98.00000% : 19574.945us 
00:17:15.709 99.00000% : 30220.968us 00:17:15.709 99.50000% : 40523.570us 00:17:15.709 99.90000% : 41897.251us 00:17:15.709 99.99000% : 42355.144us 00:17:15.709 99.99900% : 42355.144us 00:17:15.709 99.99990% : 42355.144us 00:17:15.709 99.99999% : 42355.144us 00:17:15.709 00:17:15.709 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:17:15.709 ================================================================================= 00:17:15.709 1.00000% : 10130.893us 00:17:15.709 10.00000% : 11733.520us 00:17:15.709 25.00000% : 13278.910us 00:17:15.709 50.00000% : 14996.010us 00:17:15.709 75.00000% : 16484.164us 00:17:15.709 90.00000% : 17399.951us 00:17:15.709 95.00000% : 18086.791us 00:17:15.709 98.00000% : 19460.472us 00:17:15.709 99.00000% : 28503.867us 00:17:15.709 99.50000% : 39607.783us 00:17:15.709 99.90000% : 40981.464us 00:17:15.709 99.99000% : 41439.357us 00:17:15.709 99.99900% : 41439.357us 00:17:15.709 99.99990% : 41439.357us 00:17:15.709 99.99999% : 41439.357us 00:17:15.709 00:17:15.709 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:17:15.709 ================================================================================= 00:17:15.709 1.00000% : 10245.366us 00:17:15.709 10.00000% : 11790.756us 00:17:15.709 25.00000% : 13336.147us 00:17:15.709 50.00000% : 15110.484us 00:17:15.709 75.00000% : 16484.164us 00:17:15.709 90.00000% : 17399.951us 00:17:15.709 95.00000% : 17972.318us 00:17:15.709 98.00000% : 19117.052us 00:17:15.709 99.00000% : 20147.312us 00:17:15.709 99.50000% : 28274.921us 00:17:15.709 99.90000% : 29992.021us 00:17:15.709 99.99000% : 30220.968us 00:17:15.709 99.99900% : 30220.968us 00:17:15.709 99.99990% : 30220.968us 00:17:15.709 99.99999% : 30220.968us 00:17:15.709 00:17:15.709 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:17:15.709 ============================================================================== 00:17:15.709 Range in us Cumulative IO count 00:17:15.709 9558.526 - 9615.762: 0.0350% ( 3) 00:17:15.709 9615.762 - 9672.999: 0.0700% ( 3) 00:17:15.709 9672.999 - 9730.236: 0.1399% ( 6) 00:17:15.709 9730.236 - 9787.472: 0.3032% ( 14) 00:17:15.709 9787.472 - 9844.709: 0.4781% ( 15) 00:17:15.709 9844.709 - 9901.946: 0.6763% ( 17) 00:17:15.709 9901.946 - 9959.183: 0.9678% ( 25) 00:17:15.709 9959.183 - 10016.419: 1.1660% ( 17) 00:17:15.709 10016.419 - 10073.656: 1.3759% ( 18) 00:17:15.709 10073.656 - 10130.893: 1.6325% ( 22) 00:17:15.709 10130.893 - 10188.129: 1.7607% ( 11) 00:17:15.709 10188.129 - 10245.366: 1.9240% ( 14) 00:17:15.709 10245.366 - 10302.603: 2.1105% ( 16) 00:17:15.709 10302.603 - 10359.839: 2.1922% ( 7) 00:17:15.709 10359.839 - 10417.076: 2.2854% ( 8) 00:17:15.709 10417.076 - 10474.313: 2.4137% ( 11) 00:17:15.709 10474.313 - 10531.549: 2.5886% ( 15) 00:17:15.709 10531.549 - 10588.786: 2.8801% ( 25) 00:17:15.709 10588.786 - 10646.023: 3.1600% ( 24) 00:17:15.709 10646.023 - 10703.259: 3.3932% ( 20) 00:17:15.709 10703.259 - 10760.496: 3.5798% ( 16) 00:17:15.709 10760.496 - 10817.733: 3.8246% ( 21) 00:17:15.709 10817.733 - 10874.969: 4.0928% ( 23) 00:17:15.709 10874.969 - 10932.206: 4.3260% ( 20) 00:17:15.709 10932.206 - 10989.443: 4.5359% ( 18) 00:17:15.709 10989.443 - 11046.679: 4.7108% ( 15) 00:17:15.709 11046.679 - 11103.916: 4.8507% ( 12) 00:17:15.709 11103.916 - 11161.153: 5.0140% ( 14) 00:17:15.709 11161.153 - 11218.390: 5.1772% ( 14) 00:17:15.709 11218.390 - 11275.626: 5.3871% ( 18) 00:17:15.709 11275.626 - 11332.863: 5.7836% ( 34) 00:17:15.709 11332.863 - 11390.100: 6.0401% ( 22) 
00:17:15.709 11390.100 - 11447.336: 6.3083% ( 23) 00:17:15.709 11447.336 - 11504.573: 6.6814% ( 32) 00:17:15.709 11504.573 - 11561.810: 7.1828% ( 43) 00:17:15.709 11561.810 - 11619.046: 7.7659% ( 50) 00:17:15.709 11619.046 - 11676.283: 8.2206% ( 39) 00:17:15.709 11676.283 - 11733.520: 8.7687% ( 47) 00:17:15.709 11733.520 - 11790.756: 9.4216% ( 56) 00:17:15.709 11790.756 - 11847.993: 10.1679% ( 64) 00:17:15.709 11847.993 - 11905.230: 10.8442% ( 58) 00:17:15.709 11905.230 - 11962.466: 11.4506% ( 52) 00:17:15.709 11962.466 - 12019.703: 11.9986% ( 47) 00:17:15.709 12019.703 - 12076.940: 12.7215% ( 62) 00:17:15.709 12076.940 - 12134.176: 13.3046% ( 50) 00:17:15.709 12134.176 - 12191.413: 13.7593% ( 39) 00:17:15.709 12191.413 - 12248.650: 14.3074% ( 47) 00:17:15.709 12248.650 - 12305.886: 15.0420% ( 63) 00:17:15.709 12305.886 - 12363.123: 15.5550% ( 44) 00:17:15.709 12363.123 - 12420.360: 16.0914% ( 46) 00:17:15.709 12420.360 - 12477.597: 16.6395% ( 47) 00:17:15.709 12477.597 - 12534.833: 17.2691% ( 54) 00:17:15.709 12534.833 - 12592.070: 17.8288% ( 48) 00:17:15.709 12592.070 - 12649.307: 18.3652% ( 46) 00:17:15.709 12649.307 - 12706.543: 19.0882% ( 62) 00:17:15.709 12706.543 - 12763.780: 19.7178% ( 54) 00:17:15.709 12763.780 - 12821.017: 20.3591% ( 55) 00:17:15.709 12821.017 - 12878.253: 20.9771% ( 53) 00:17:15.709 12878.253 - 12935.490: 21.6418% ( 57) 00:17:15.709 12935.490 - 12992.727: 22.2715% ( 54) 00:17:15.709 12992.727 - 13049.963: 22.8545% ( 50) 00:17:15.709 13049.963 - 13107.200: 23.4608% ( 52) 00:17:15.709 13107.200 - 13164.437: 24.1138% ( 56) 00:17:15.709 13164.437 - 13221.673: 24.6502% ( 46) 00:17:15.709 13221.673 - 13278.910: 25.3148% ( 57) 00:17:15.709 13278.910 - 13336.147: 25.9095% ( 51) 00:17:15.709 13336.147 - 13393.383: 26.6325% ( 62) 00:17:15.709 13393.383 - 13450.620: 27.4021% ( 66) 00:17:15.709 13450.620 - 13507.857: 28.2999% ( 77) 00:17:15.709 13507.857 - 13565.093: 29.1744% ( 75) 00:17:15.709 13565.093 - 13622.330: 29.9324% ( 65) 00:17:15.709 13622.330 - 13679.567: 30.7369% ( 69) 00:17:15.709 13679.567 - 13736.803: 31.8447% ( 95) 00:17:15.709 13736.803 - 13794.040: 32.4977% ( 56) 00:17:15.709 13794.040 - 13851.277: 33.1390% ( 55) 00:17:15.709 13851.277 - 13908.514: 33.7570% ( 53) 00:17:15.709 13908.514 - 13965.750: 34.2701% ( 44) 00:17:15.709 13965.750 - 14022.987: 34.9580% ( 59) 00:17:15.709 14022.987 - 14080.224: 35.4944% ( 46) 00:17:15.709 14080.224 - 14137.460: 36.0308% ( 46) 00:17:15.709 14137.460 - 14194.697: 37.0103% ( 84) 00:17:15.709 14194.697 - 14251.934: 37.7332% ( 62) 00:17:15.709 14251.934 - 14309.170: 38.6544% ( 79) 00:17:15.709 14309.170 - 14366.407: 39.4240% ( 66) 00:17:15.709 14366.407 - 14423.644: 40.2169% ( 68) 00:17:15.709 14423.644 - 14480.880: 41.2547% ( 89) 00:17:15.709 14480.880 - 14538.117: 42.1992% ( 81) 00:17:15.709 14538.117 - 14595.354: 43.1203% ( 79) 00:17:15.709 14595.354 - 14652.590: 43.9949% ( 75) 00:17:15.709 14652.590 - 14767.064: 46.0238% ( 174) 00:17:15.709 14767.064 - 14881.537: 48.0177% ( 171) 00:17:15.709 14881.537 - 14996.010: 50.5014% ( 213) 00:17:15.709 14996.010 - 15110.484: 52.7052% ( 189) 00:17:15.709 15110.484 - 15224.957: 55.0140% ( 198) 00:17:15.709 15224.957 - 15339.431: 56.9963% ( 170) 00:17:15.709 15339.431 - 15453.904: 59.5033% ( 215) 00:17:15.709 15453.904 - 15568.377: 61.6138% ( 181) 00:17:15.709 15568.377 - 15682.851: 63.7826% ( 186) 00:17:15.709 15682.851 - 15797.324: 65.7999% ( 173) 00:17:15.709 15797.324 - 15911.797: 67.6189% ( 156) 00:17:15.709 15911.797 - 16026.271: 69.8694% ( 193) 00:17:15.709 16026.271 - 
16140.744: 72.2248% ( 202) 00:17:15.709 16140.744 - 16255.217: 74.2771% ( 176) 00:17:15.709 16255.217 - 16369.691: 76.1194% ( 158) 00:17:15.709 16369.691 - 16484.164: 77.8218% ( 146) 00:17:15.709 16484.164 - 16598.638: 79.3493% ( 131) 00:17:15.709 16598.638 - 16713.111: 80.8535% ( 129) 00:17:15.709 16713.111 - 16827.584: 82.2878% ( 123) 00:17:15.709 16827.584 - 16942.058: 83.7920% ( 129) 00:17:15.709 16942.058 - 17056.531: 85.3312% ( 132) 00:17:15.709 17056.531 - 17171.004: 86.7421% ( 121) 00:17:15.709 17171.004 - 17285.478: 88.4795% ( 149) 00:17:15.709 17285.478 - 17399.951: 89.7155% ( 106) 00:17:15.709 17399.951 - 17514.424: 90.7766% ( 91) 00:17:15.709 17514.424 - 17628.898: 91.7910% ( 87) 00:17:15.709 17628.898 - 17743.371: 93.0854% ( 111) 00:17:15.709 17743.371 - 17857.845: 93.8783% ( 68) 00:17:15.709 17857.845 - 17972.318: 94.6129% ( 63) 00:17:15.709 17972.318 - 18086.791: 95.1259% ( 44) 00:17:15.709 18086.791 - 18201.265: 95.6040% ( 41) 00:17:15.709 18201.265 - 18315.738: 96.0238% ( 36) 00:17:15.709 18315.738 - 18430.211: 96.3503% ( 28) 00:17:15.709 18430.211 - 18544.685: 96.6651% ( 27) 00:17:15.709 18544.685 - 18659.158: 96.8867% ( 19) 00:17:15.709 18659.158 - 18773.631: 97.1315% ( 21) 00:17:15.709 18773.631 - 18888.105: 97.3997% ( 23) 00:17:15.709 18888.105 - 19002.578: 97.5396% ( 12) 00:17:15.709 19002.578 - 19117.052: 97.7845% ( 21) 00:17:15.709 19117.052 - 19231.525: 97.8895% ( 9) 00:17:15.709 19231.525 - 19345.998: 97.9827% ( 8) 00:17:15.710 19345.998 - 19460.472: 98.0177% ( 3) 00:17:15.710 19460.472 - 19574.945: 98.0527% ( 3) 00:17:15.710 19574.945 - 19689.418: 98.1110% ( 5) 00:17:15.710 19689.418 - 19803.892: 98.1576% ( 4) 00:17:15.710 19803.892 - 19918.365: 98.2160% ( 5) 00:17:15.710 19918.365 - 20032.838: 98.2859% ( 6) 00:17:15.710 20032.838 - 20147.312: 98.3675% ( 7) 00:17:15.710 20147.312 - 20261.785: 98.4492% ( 7) 00:17:15.710 20261.785 - 20376.259: 98.4841% ( 3) 00:17:15.710 20376.259 - 20490.732: 98.4958% ( 1) 00:17:15.710 20605.205 - 20719.679: 98.5075% ( 1) 00:17:15.710 32281.488 - 32510.435: 98.5658% ( 5) 00:17:15.710 32510.435 - 32739.382: 98.7057% ( 12) 00:17:15.710 32739.382 - 32968.328: 98.8689% ( 14) 00:17:15.710 32968.328 - 33197.275: 98.8923% ( 2) 00:17:15.710 33197.275 - 33426.222: 98.9622% ( 6) 00:17:15.710 33426.222 - 33655.169: 99.0555% ( 8) 00:17:15.710 33655.169 - 33884.115: 99.1255% ( 6) 00:17:15.710 33884.115 - 34113.062: 99.2304% ( 9) 00:17:15.710 34113.062 - 34342.009: 99.2537% ( 2) 00:17:15.710 41668.304 - 41897.251: 99.2771% ( 2) 00:17:15.710 41897.251 - 42126.197: 99.3120% ( 3) 00:17:15.710 42126.197 - 42355.144: 99.3703% ( 5) 00:17:15.710 42355.144 - 42584.091: 99.4403% ( 6) 00:17:15.710 42584.091 - 42813.038: 99.4753% ( 3) 00:17:15.710 42813.038 - 43041.984: 99.5219% ( 4) 00:17:15.710 43041.984 - 43270.931: 99.5802% ( 5) 00:17:15.710 43270.931 - 43499.878: 99.6385% ( 5) 00:17:15.710 43499.878 - 43728.824: 99.6852% ( 4) 00:17:15.710 43728.824 - 43957.771: 99.7435% ( 5) 00:17:15.710 43957.771 - 44186.718: 99.7901% ( 4) 00:17:15.710 44186.718 - 44415.665: 99.8484% ( 5) 00:17:15.710 44415.665 - 44644.611: 99.9067% ( 5) 00:17:15.710 44644.611 - 44873.558: 99.9650% ( 5) 00:17:15.710 44873.558 - 45102.505: 100.0000% ( 3) 00:17:15.710 00:17:15.710 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:17:15.710 ============================================================================== 00:17:15.710 Range in us Cumulative IO count 00:17:15.710 9787.472 - 9844.709: 0.0233% ( 2) 00:17:15.710 9844.709 - 9901.946: 0.0816% ( 5) 00:17:15.710 
9901.946 - 9959.183: 0.1749% ( 8) 00:17:15.710 9959.183 - 10016.419: 0.3265% ( 13) 00:17:15.710 10016.419 - 10073.656: 0.5714% ( 21) 00:17:15.710 10073.656 - 10130.893: 0.8046% ( 20) 00:17:15.710 10130.893 - 10188.129: 1.1311% ( 28) 00:17:15.710 10188.129 - 10245.366: 1.4925% ( 31) 00:17:15.710 10245.366 - 10302.603: 2.0173% ( 45) 00:17:15.710 10302.603 - 10359.839: 2.3438% ( 28) 00:17:15.710 10359.839 - 10417.076: 2.6236% ( 24) 00:17:15.710 10417.076 - 10474.313: 2.9967% ( 32) 00:17:15.710 10474.313 - 10531.549: 3.2649% ( 23) 00:17:15.710 10531.549 - 10588.786: 3.5564% ( 25) 00:17:15.710 10588.786 - 10646.023: 3.8363% ( 24) 00:17:15.710 10646.023 - 10703.259: 4.1278% ( 25) 00:17:15.710 10703.259 - 10760.496: 4.4193% ( 25) 00:17:15.710 10760.496 - 10817.733: 4.5592% ( 12) 00:17:15.710 10817.733 - 10874.969: 4.7225% ( 14) 00:17:15.710 10874.969 - 10932.206: 4.8158% ( 8) 00:17:15.710 10932.206 - 10989.443: 4.9440% ( 11) 00:17:15.710 10989.443 - 11046.679: 5.1306% ( 16) 00:17:15.710 11046.679 - 11103.916: 5.3055% ( 15) 00:17:15.710 11103.916 - 11161.153: 5.5620% ( 22) 00:17:15.710 11161.153 - 11218.390: 5.7020% ( 12) 00:17:15.710 11218.390 - 11275.626: 5.8419% ( 12) 00:17:15.710 11275.626 - 11332.863: 5.9585% ( 10) 00:17:15.710 11332.863 - 11390.100: 6.1917% ( 20) 00:17:15.710 11390.100 - 11447.336: 6.5182% ( 28) 00:17:15.710 11447.336 - 11504.573: 6.8797% ( 31) 00:17:15.710 11504.573 - 11561.810: 7.2994% ( 36) 00:17:15.710 11561.810 - 11619.046: 7.7425% ( 38) 00:17:15.710 11619.046 - 11676.283: 8.3605% ( 53) 00:17:15.710 11676.283 - 11733.520: 9.2584% ( 77) 00:17:15.710 11733.520 - 11790.756: 10.0396% ( 67) 00:17:15.710 11790.756 - 11847.993: 10.6576% ( 53) 00:17:15.710 11847.993 - 11905.230: 11.2873% ( 54) 00:17:15.710 11905.230 - 11962.466: 11.8354% ( 47) 00:17:15.710 11962.466 - 12019.703: 12.3251% ( 42) 00:17:15.710 12019.703 - 12076.940: 12.8265% ( 43) 00:17:15.710 12076.940 - 12134.176: 13.4445% ( 53) 00:17:15.710 12134.176 - 12191.413: 14.0275% ( 50) 00:17:15.710 12191.413 - 12248.650: 14.5989% ( 49) 00:17:15.710 12248.650 - 12305.886: 15.0653% ( 40) 00:17:15.710 12305.886 - 12363.123: 15.4967% ( 37) 00:17:15.710 12363.123 - 12420.360: 15.8815% ( 33) 00:17:15.710 12420.360 - 12477.597: 16.2663% ( 33) 00:17:15.710 12477.597 - 12534.833: 16.7327% ( 40) 00:17:15.710 12534.833 - 12592.070: 17.2225% ( 42) 00:17:15.710 12592.070 - 12649.307: 17.8055% ( 50) 00:17:15.710 12649.307 - 12706.543: 18.5168% ( 61) 00:17:15.710 12706.543 - 12763.780: 19.3913% ( 75) 00:17:15.710 12763.780 - 12821.017: 20.2076% ( 70) 00:17:15.710 12821.017 - 12878.253: 20.8839% ( 58) 00:17:15.710 12878.253 - 12935.490: 21.5718% ( 59) 00:17:15.710 12935.490 - 12992.727: 22.2598% ( 59) 00:17:15.710 12992.727 - 13049.963: 23.0993% ( 72) 00:17:15.710 13049.963 - 13107.200: 23.6824% ( 50) 00:17:15.710 13107.200 - 13164.437: 24.2188% ( 46) 00:17:15.710 13164.437 - 13221.673: 24.8018% ( 50) 00:17:15.710 13221.673 - 13278.910: 25.4081% ( 52) 00:17:15.710 13278.910 - 13336.147: 26.1311% ( 62) 00:17:15.710 13336.147 - 13393.383: 26.8190% ( 59) 00:17:15.710 13393.383 - 13450.620: 27.3904% ( 49) 00:17:15.710 13450.620 - 13507.857: 27.9734% ( 50) 00:17:15.710 13507.857 - 13565.093: 28.7430% ( 66) 00:17:15.710 13565.093 - 13622.330: 29.4660% ( 62) 00:17:15.710 13622.330 - 13679.567: 30.3638% ( 77) 00:17:15.710 13679.567 - 13736.803: 31.0751% ( 61) 00:17:15.710 13736.803 - 13794.040: 31.7164% ( 55) 00:17:15.710 13794.040 - 13851.277: 32.3228% ( 52) 00:17:15.710 13851.277 - 13908.514: 32.9991% ( 58) 00:17:15.710 13908.514 - 
13965.750: 33.5938% ( 51) 00:17:15.710 13965.750 - 14022.987: 34.2118% ( 53) 00:17:15.710 14022.987 - 14080.224: 34.6898% ( 41) 00:17:15.710 14080.224 - 14137.460: 35.2612% ( 49) 00:17:15.710 14137.460 - 14194.697: 35.9025% ( 55) 00:17:15.710 14194.697 - 14251.934: 36.4855% ( 50) 00:17:15.710 14251.934 - 14309.170: 36.9636% ( 41) 00:17:15.710 14309.170 - 14366.407: 37.4650% ( 43) 00:17:15.710 14366.407 - 14423.644: 38.0480% ( 50) 00:17:15.710 14423.644 - 14480.880: 38.8643% ( 70) 00:17:15.710 14480.880 - 14538.117: 39.8321% ( 83) 00:17:15.710 14538.117 - 14595.354: 40.9049% ( 92) 00:17:15.710 14595.354 - 14652.590: 41.9426% ( 89) 00:17:15.710 14652.590 - 14767.064: 44.1465% ( 189) 00:17:15.710 14767.064 - 14881.537: 46.2337% ( 179) 00:17:15.710 14881.537 - 14996.010: 48.8223% ( 222) 00:17:15.710 14996.010 - 15110.484: 51.3410% ( 216) 00:17:15.710 15110.484 - 15224.957: 53.9062% ( 220) 00:17:15.710 15224.957 - 15339.431: 56.7864% ( 247) 00:17:15.710 15339.431 - 15453.904: 59.5499% ( 237) 00:17:15.710 15453.904 - 15568.377: 62.3368% ( 239) 00:17:15.710 15568.377 - 15682.851: 64.7621% ( 208) 00:17:15.710 15682.851 - 15797.324: 67.0126% ( 193) 00:17:15.710 15797.324 - 15911.797: 69.2281% ( 190) 00:17:15.710 15911.797 - 16026.271: 71.2570% ( 174) 00:17:15.710 16026.271 - 16140.744: 72.8545% ( 137) 00:17:15.710 16140.744 - 16255.217: 74.2304% ( 118) 00:17:15.710 16255.217 - 16369.691: 75.8979% ( 143) 00:17:15.710 16369.691 - 16484.164: 77.5886% ( 145) 00:17:15.710 16484.164 - 16598.638: 79.5243% ( 166) 00:17:15.710 16598.638 - 16713.111: 81.2850% ( 151) 00:17:15.710 16713.111 - 16827.584: 82.8125% ( 131) 00:17:15.710 16827.584 - 16942.058: 84.1535% ( 115) 00:17:15.710 16942.058 - 17056.531: 85.6227% ( 126) 00:17:15.710 17056.531 - 17171.004: 87.1735% ( 133) 00:17:15.710 17171.004 - 17285.478: 88.8410% ( 143) 00:17:15.710 17285.478 - 17399.951: 90.1586% ( 113) 00:17:15.710 17399.951 - 17514.424: 91.2896% ( 97) 00:17:15.710 17514.424 - 17628.898: 92.3507% ( 91) 00:17:15.710 17628.898 - 17743.371: 93.2136% ( 74) 00:17:15.710 17743.371 - 17857.845: 94.0882% ( 75) 00:17:15.710 17857.845 - 17972.318: 94.9044% ( 70) 00:17:15.710 17972.318 - 18086.791: 95.8022% ( 77) 00:17:15.710 18086.791 - 18201.265: 96.3969% ( 51) 00:17:15.710 18201.265 - 18315.738: 96.7817% ( 33) 00:17:15.710 18315.738 - 18430.211: 97.0149% ( 20) 00:17:15.710 18430.211 - 18544.685: 97.1898% ( 15) 00:17:15.710 18544.685 - 18659.158: 97.3298% ( 12) 00:17:15.710 18659.158 - 18773.631: 97.4697% ( 12) 00:17:15.710 18773.631 - 18888.105: 97.6213% ( 13) 00:17:15.710 18888.105 - 19002.578: 97.7612% ( 12) 00:17:15.710 19002.578 - 19117.052: 97.8895% ( 11) 00:17:15.710 19117.052 - 19231.525: 98.0993% ( 18) 00:17:15.710 19231.525 - 19345.998: 98.3559% ( 22) 00:17:15.710 19345.998 - 19460.472: 98.4958% ( 12) 00:17:15.710 19460.472 - 19574.945: 98.5075% ( 1) 00:17:15.710 30907.808 - 31136.755: 98.5658% ( 5) 00:17:15.710 31136.755 - 31365.701: 98.6707% ( 9) 00:17:15.710 31365.701 - 31594.648: 98.7640% ( 8) 00:17:15.710 31594.648 - 31823.595: 98.8573% ( 8) 00:17:15.710 31823.595 - 32052.541: 98.9506% ( 8) 00:17:15.710 32052.541 - 32281.488: 99.0322% ( 7) 00:17:15.710 32281.488 - 32510.435: 99.1255% ( 8) 00:17:15.710 32510.435 - 32739.382: 99.2188% ( 8) 00:17:15.710 32739.382 - 32968.328: 99.2537% ( 3) 00:17:15.710 40981.464 - 41210.410: 99.3004% ( 4) 00:17:15.710 41210.410 - 41439.357: 99.3587% ( 5) 00:17:15.710 41439.357 - 41668.304: 99.4170% ( 5) 00:17:15.710 41668.304 - 41897.251: 99.4753% ( 5) 00:17:15.710 41897.251 - 42126.197: 99.5336% ( 5) 
00:17:15.710 42126.197 - 42355.144: 99.5919% ( 5) 00:17:15.711 42355.144 - 42584.091: 99.6502% ( 5) 00:17:15.711 42584.091 - 42813.038: 99.7085% ( 5) 00:17:15.711 42813.038 - 43041.984: 99.7785% ( 6) 00:17:15.711 43041.984 - 43270.931: 99.8368% ( 5) 00:17:15.711 43270.931 - 43499.878: 99.8951% ( 5) 00:17:15.711 43499.878 - 43728.824: 99.9417% ( 4) 00:17:15.711 43728.824 - 43957.771: 100.0000% ( 5) 00:17:15.711 00:17:15.711 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:17:15.711 ============================================================================== 00:17:15.711 Range in us Cumulative IO count 00:17:15.711 9730.236 - 9787.472: 0.0466% ( 4) 00:17:15.711 9787.472 - 9844.709: 0.0933% ( 4) 00:17:15.711 9844.709 - 9901.946: 0.1283% ( 3) 00:17:15.711 9901.946 - 9959.183: 0.2449% ( 10) 00:17:15.711 9959.183 - 10016.419: 0.4081% ( 14) 00:17:15.711 10016.419 - 10073.656: 0.5947% ( 16) 00:17:15.711 10073.656 - 10130.893: 0.9212% ( 28) 00:17:15.711 10130.893 - 10188.129: 1.2477% ( 28) 00:17:15.711 10188.129 - 10245.366: 1.6325% ( 33) 00:17:15.711 10245.366 - 10302.603: 1.9823% ( 30) 00:17:15.711 10302.603 - 10359.839: 2.3787% ( 34) 00:17:15.711 10359.839 - 10417.076: 2.6936% ( 27) 00:17:15.711 10417.076 - 10474.313: 3.0667% ( 32) 00:17:15.711 10474.313 - 10531.549: 3.2882% ( 19) 00:17:15.711 10531.549 - 10588.786: 3.4981% ( 18) 00:17:15.711 10588.786 - 10646.023: 3.6614% ( 14) 00:17:15.711 10646.023 - 10703.259: 3.7780% ( 10) 00:17:15.711 10703.259 - 10760.496: 4.0462% ( 23) 00:17:15.711 10760.496 - 10817.733: 4.3144% ( 23) 00:17:15.711 10817.733 - 10874.969: 4.4660% ( 13) 00:17:15.711 10874.969 - 10932.206: 4.6875% ( 19) 00:17:15.711 10932.206 - 10989.443: 4.9557% ( 23) 00:17:15.711 10989.443 - 11046.679: 5.3638% ( 35) 00:17:15.711 11046.679 - 11103.916: 5.6320% ( 23) 00:17:15.711 11103.916 - 11161.153: 5.8069% ( 15) 00:17:15.711 11161.153 - 11218.390: 5.9352% ( 11) 00:17:15.711 11218.390 - 11275.626: 6.0401% ( 9) 00:17:15.711 11275.626 - 11332.863: 6.1217% ( 7) 00:17:15.711 11332.863 - 11390.100: 6.2150% ( 8) 00:17:15.711 11390.100 - 11447.336: 6.3316% ( 10) 00:17:15.711 11447.336 - 11504.573: 6.5532% ( 19) 00:17:15.711 11504.573 - 11561.810: 6.8097% ( 22) 00:17:15.711 11561.810 - 11619.046: 7.1362% ( 28) 00:17:15.711 11619.046 - 11676.283: 7.5793% ( 38) 00:17:15.711 11676.283 - 11733.520: 8.1740% ( 51) 00:17:15.711 11733.520 - 11790.756: 9.0252% ( 73) 00:17:15.711 11790.756 - 11847.993: 9.7481% ( 62) 00:17:15.711 11847.993 - 11905.230: 10.4478% ( 60) 00:17:15.711 11905.230 - 11962.466: 11.2640% ( 70) 00:17:15.711 11962.466 - 12019.703: 12.0103% ( 64) 00:17:15.711 12019.703 - 12076.940: 12.6049% ( 51) 00:17:15.711 12076.940 - 12134.176: 13.3396% ( 63) 00:17:15.711 12134.176 - 12191.413: 13.9925% ( 56) 00:17:15.711 12191.413 - 12248.650: 14.4939% ( 43) 00:17:15.711 12248.650 - 12305.886: 15.0653% ( 49) 00:17:15.711 12305.886 - 12363.123: 15.6250% ( 48) 00:17:15.711 12363.123 - 12420.360: 16.0098% ( 33) 00:17:15.711 12420.360 - 12477.597: 16.3363% ( 28) 00:17:15.711 12477.597 - 12534.833: 16.6395% ( 26) 00:17:15.711 12534.833 - 12592.070: 17.0942% ( 39) 00:17:15.711 12592.070 - 12649.307: 17.6889% ( 51) 00:17:15.711 12649.307 - 12706.543: 18.2836% ( 51) 00:17:15.711 12706.543 - 12763.780: 18.8200% ( 46) 00:17:15.711 12763.780 - 12821.017: 19.2747% ( 39) 00:17:15.711 12821.017 - 12878.253: 20.1143% ( 72) 00:17:15.711 12878.253 - 12935.490: 20.9888% ( 75) 00:17:15.711 12935.490 - 12992.727: 21.5951% ( 52) 00:17:15.711 12992.727 - 13049.963: 22.1898% ( 51) 00:17:15.711 13049.963 
- 13107.200: 22.9128% ( 62) 00:17:15.711 13107.200 - 13164.437: 23.5891% ( 58) 00:17:15.711 13164.437 - 13221.673: 24.2771% ( 59) 00:17:15.711 13221.673 - 13278.910: 25.0000% ( 62) 00:17:15.711 13278.910 - 13336.147: 25.7346% ( 63) 00:17:15.711 13336.147 - 13393.383: 26.3759% ( 55) 00:17:15.711 13393.383 - 13450.620: 27.1688% ( 68) 00:17:15.711 13450.620 - 13507.857: 27.8451% ( 58) 00:17:15.711 13507.857 - 13565.093: 28.4865% ( 55) 00:17:15.711 13565.093 - 13622.330: 29.1278% ( 55) 00:17:15.711 13622.330 - 13679.567: 30.1073% ( 84) 00:17:15.711 13679.567 - 13736.803: 31.0051% ( 77) 00:17:15.711 13736.803 - 13794.040: 31.9146% ( 78) 00:17:15.711 13794.040 - 13851.277: 32.7542% ( 72) 00:17:15.711 13851.277 - 13908.514: 33.8153% ( 91) 00:17:15.711 13908.514 - 13965.750: 34.5732% ( 65) 00:17:15.711 13965.750 - 14022.987: 35.2612% ( 59) 00:17:15.711 14022.987 - 14080.224: 35.8559% ( 51) 00:17:15.711 14080.224 - 14137.460: 36.4389% ( 50) 00:17:15.711 14137.460 - 14194.697: 37.1968% ( 65) 00:17:15.711 14194.697 - 14251.934: 38.1996% ( 86) 00:17:15.711 14251.934 - 14309.170: 39.0392% ( 72) 00:17:15.711 14309.170 - 14366.407: 39.9370% ( 77) 00:17:15.711 14366.407 - 14423.644: 40.7766% ( 72) 00:17:15.711 14423.644 - 14480.880: 41.6278% ( 73) 00:17:15.711 14480.880 - 14538.117: 42.3158% ( 59) 00:17:15.711 14538.117 - 14595.354: 43.2020% ( 76) 00:17:15.711 14595.354 - 14652.590: 43.9599% ( 65) 00:17:15.711 14652.590 - 14767.064: 45.9072% ( 167) 00:17:15.711 14767.064 - 14881.537: 48.1343% ( 191) 00:17:15.711 14881.537 - 14996.010: 50.0350% ( 163) 00:17:15.711 14996.010 - 15110.484: 51.8307% ( 154) 00:17:15.711 15110.484 - 15224.957: 53.6847% ( 159) 00:17:15.711 15224.957 - 15339.431: 55.9235% ( 192) 00:17:15.711 15339.431 - 15453.904: 58.4422% ( 216) 00:17:15.711 15453.904 - 15568.377: 60.8792% ( 209) 00:17:15.711 15568.377 - 15682.851: 62.9198% ( 175) 00:17:15.711 15682.851 - 15797.324: 64.9254% ( 172) 00:17:15.711 15797.324 - 15911.797: 67.4207% ( 214) 00:17:15.711 15911.797 - 16026.271: 70.3358% ( 250) 00:17:15.711 16026.271 - 16140.744: 72.5163% ( 187) 00:17:15.711 16140.744 - 16255.217: 74.3354% ( 156) 00:17:15.711 16255.217 - 16369.691: 76.0494% ( 147) 00:17:15.711 16369.691 - 16484.164: 77.6936% ( 141) 00:17:15.711 16484.164 - 16598.638: 79.6642% ( 169) 00:17:15.711 16598.638 - 16713.111: 81.6348% ( 169) 00:17:15.711 16713.111 - 16827.584: 83.4655% ( 157) 00:17:15.711 16827.584 - 16942.058: 85.0163% ( 133) 00:17:15.711 16942.058 - 17056.531: 86.8703% ( 159) 00:17:15.711 17056.531 - 17171.004: 88.3279% ( 125) 00:17:15.711 17171.004 - 17285.478: 89.7271% ( 120) 00:17:15.711 17285.478 - 17399.951: 90.8699% ( 98) 00:17:15.711 17399.951 - 17514.424: 91.7677% ( 77) 00:17:15.711 17514.424 - 17628.898: 92.4674% ( 60) 00:17:15.711 17628.898 - 17743.371: 93.1087% ( 55) 00:17:15.711 17743.371 - 17857.845: 93.7150% ( 52) 00:17:15.711 17857.845 - 17972.318: 94.4380% ( 62) 00:17:15.711 17972.318 - 18086.791: 95.1143% ( 58) 00:17:15.711 18086.791 - 18201.265: 95.7673% ( 56) 00:17:15.711 18201.265 - 18315.738: 96.1754% ( 35) 00:17:15.711 18315.738 - 18430.211: 96.7001% ( 45) 00:17:15.711 18430.211 - 18544.685: 97.2831% ( 50) 00:17:15.711 18544.685 - 18659.158: 97.7262% ( 38) 00:17:15.711 18659.158 - 18773.631: 98.2043% ( 41) 00:17:15.711 18773.631 - 18888.105: 98.4025% ( 17) 00:17:15.711 18888.105 - 19002.578: 98.4841% ( 7) 00:17:15.711 19002.578 - 19117.052: 98.5075% ( 2) 00:17:15.711 30220.968 - 30449.914: 98.5658% ( 5) 00:17:15.711 30449.914 - 30678.861: 98.6590% ( 8) 00:17:15.711 30678.861 - 30907.808: 
98.7640% ( 9) 00:17:15.711 30907.808 - 31136.755: 98.8689% ( 9) 00:17:15.711 31136.755 - 31365.701: 98.9739% ( 9) 00:17:15.711 31365.701 - 31594.648: 99.0788% ( 9) 00:17:15.711 31594.648 - 31823.595: 99.1721% ( 8) 00:17:15.711 31823.595 - 32052.541: 99.2537% ( 7) 00:17:15.711 40065.677 - 40294.624: 99.2654% ( 1) 00:17:15.711 40294.624 - 40523.570: 99.3237% ( 5) 00:17:15.711 40523.570 - 40752.517: 99.3820% ( 5) 00:17:15.711 40752.517 - 40981.464: 99.4403% ( 5) 00:17:15.711 40981.464 - 41210.410: 99.4986% ( 5) 00:17:15.711 41210.410 - 41439.357: 99.5569% ( 5) 00:17:15.711 41439.357 - 41668.304: 99.6152% ( 5) 00:17:15.711 41668.304 - 41897.251: 99.6735% ( 5) 00:17:15.711 41897.251 - 42126.197: 99.7318% ( 5) 00:17:15.711 42126.197 - 42355.144: 99.7901% ( 5) 00:17:15.711 42355.144 - 42584.091: 99.8601% ( 6) 00:17:15.711 42584.091 - 42813.038: 99.9184% ( 5) 00:17:15.711 42813.038 - 43041.984: 99.9767% ( 5) 00:17:15.711 43041.984 - 43270.931: 100.0000% ( 2) 00:17:15.711 00:17:15.711 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:17:15.711 ============================================================================== 00:17:15.711 Range in us Cumulative IO count 00:17:15.711 9844.709 - 9901.946: 0.1049% ( 9) 00:17:15.711 9901.946 - 9959.183: 0.2332% ( 11) 00:17:15.711 9959.183 - 10016.419: 0.3848% ( 13) 00:17:15.711 10016.419 - 10073.656: 0.5947% ( 18) 00:17:15.711 10073.656 - 10130.893: 0.8979% ( 26) 00:17:15.711 10130.893 - 10188.129: 1.1894% ( 25) 00:17:15.711 10188.129 - 10245.366: 1.5042% ( 27) 00:17:15.711 10245.366 - 10302.603: 1.8424% ( 29) 00:17:15.711 10302.603 - 10359.839: 2.2155% ( 32) 00:17:15.711 10359.839 - 10417.076: 2.6119% ( 34) 00:17:15.711 10417.076 - 10474.313: 3.1017% ( 42) 00:17:15.711 10474.313 - 10531.549: 3.4748% ( 32) 00:17:15.711 10531.549 - 10588.786: 4.0229% ( 47) 00:17:15.711 10588.786 - 10646.023: 4.5126% ( 42) 00:17:15.711 10646.023 - 10703.259: 4.7341% ( 19) 00:17:15.711 10703.259 - 10760.496: 4.9440% ( 18) 00:17:15.711 10760.496 - 10817.733: 5.1772% ( 20) 00:17:15.711 10817.733 - 10874.969: 5.3521% ( 15) 00:17:15.711 10874.969 - 10932.206: 5.5504% ( 17) 00:17:15.711 10932.206 - 10989.443: 5.8186% ( 23) 00:17:15.711 10989.443 - 11046.679: 5.9818% ( 14) 00:17:15.711 11046.679 - 11103.916: 6.1101% ( 11) 00:17:15.712 11103.916 - 11161.153: 6.1800% ( 6) 00:17:15.712 11161.153 - 11218.390: 6.2850% ( 9) 00:17:15.712 11218.390 - 11275.626: 6.3899% ( 9) 00:17:15.712 11275.626 - 11332.863: 6.5065% ( 10) 00:17:15.712 11332.863 - 11390.100: 6.6231% ( 10) 00:17:15.712 11390.100 - 11447.336: 6.8680% ( 21) 00:17:15.712 11447.336 - 11504.573: 7.1595% ( 25) 00:17:15.712 11504.573 - 11561.810: 7.4860% ( 28) 00:17:15.712 11561.810 - 11619.046: 8.0690% ( 50) 00:17:15.712 11619.046 - 11676.283: 8.7687% ( 60) 00:17:15.712 11676.283 - 11733.520: 9.6898% ( 79) 00:17:15.712 11733.520 - 11790.756: 10.3428% ( 56) 00:17:15.712 11790.756 - 11847.993: 11.0075% ( 57) 00:17:15.712 11847.993 - 11905.230: 11.6721% ( 57) 00:17:15.712 11905.230 - 11962.466: 12.3368% ( 57) 00:17:15.712 11962.466 - 12019.703: 12.9081% ( 49) 00:17:15.712 12019.703 - 12076.940: 13.5028% ( 51) 00:17:15.712 12076.940 - 12134.176: 14.0159% ( 44) 00:17:15.712 12134.176 - 12191.413: 14.5522% ( 46) 00:17:15.712 12191.413 - 12248.650: 15.1003% ( 47) 00:17:15.712 12248.650 - 12305.886: 15.5784% ( 41) 00:17:15.712 12305.886 - 12363.123: 16.1847% ( 52) 00:17:15.712 12363.123 - 12420.360: 16.9776% ( 68) 00:17:15.712 12420.360 - 12477.597: 17.4790% ( 43) 00:17:15.712 12477.597 - 12534.833: 18.1087% ( 54) 
00:17:15.712 12534.833 - 12592.070: 18.7850% ( 58) 00:17:15.712 12592.070 - 12649.307: 19.4846% ( 60) 00:17:15.712 12649.307 - 12706.543: 20.1143% ( 54) 00:17:15.712 12706.543 - 12763.780: 20.8022% ( 59) 00:17:15.712 12763.780 - 12821.017: 21.8750% ( 92) 00:17:15.712 12821.017 - 12878.253: 22.6446% ( 66) 00:17:15.712 12878.253 - 12935.490: 23.1693% ( 45) 00:17:15.712 12935.490 - 12992.727: 23.7523% ( 50) 00:17:15.712 12992.727 - 13049.963: 24.4520% ( 60) 00:17:15.712 13049.963 - 13107.200: 25.1982% ( 64) 00:17:15.712 13107.200 - 13164.437: 25.9562% ( 65) 00:17:15.712 13164.437 - 13221.673: 26.5742% ( 53) 00:17:15.712 13221.673 - 13278.910: 27.1105% ( 46) 00:17:15.712 13278.910 - 13336.147: 27.5886% ( 41) 00:17:15.712 13336.147 - 13393.383: 27.9384% ( 30) 00:17:15.712 13393.383 - 13450.620: 28.3582% ( 36) 00:17:15.712 13450.620 - 13507.857: 28.7896% ( 37) 00:17:15.712 13507.857 - 13565.093: 29.2677% ( 41) 00:17:15.712 13565.093 - 13622.330: 29.7808% ( 44) 00:17:15.712 13622.330 - 13679.567: 30.5737% ( 68) 00:17:15.712 13679.567 - 13736.803: 31.2617% ( 59) 00:17:15.712 13736.803 - 13794.040: 32.2878% ( 88) 00:17:15.712 13794.040 - 13851.277: 33.1623% ( 75) 00:17:15.712 13851.277 - 13908.514: 34.0368% ( 75) 00:17:15.712 13908.514 - 13965.750: 34.7948% ( 65) 00:17:15.712 13965.750 - 14022.987: 35.4594% ( 57) 00:17:15.712 14022.987 - 14080.224: 36.2873% ( 71) 00:17:15.712 14080.224 - 14137.460: 37.2085% ( 79) 00:17:15.712 14137.460 - 14194.697: 38.2229% ( 87) 00:17:15.712 14194.697 - 14251.934: 39.3190% ( 94) 00:17:15.712 14251.934 - 14309.170: 40.4501% ( 97) 00:17:15.712 14309.170 - 14366.407: 41.5229% ( 92) 00:17:15.712 14366.407 - 14423.644: 42.6189% ( 94) 00:17:15.712 14423.644 - 14480.880: 43.7150% ( 94) 00:17:15.712 14480.880 - 14538.117: 44.9044% ( 102) 00:17:15.712 14538.117 - 14595.354: 45.8955% ( 85) 00:17:15.712 14595.354 - 14652.590: 46.7467% ( 73) 00:17:15.712 14652.590 - 14767.064: 48.3209% ( 135) 00:17:15.712 14767.064 - 14881.537: 50.2332% ( 164) 00:17:15.712 14881.537 - 14996.010: 52.0989% ( 160) 00:17:15.712 14996.010 - 15110.484: 53.9762% ( 161) 00:17:15.712 15110.484 - 15224.957: 55.8885% ( 164) 00:17:15.712 15224.957 - 15339.431: 57.6143% ( 148) 00:17:15.712 15339.431 - 15453.904: 59.1418% ( 131) 00:17:15.712 15453.904 - 15568.377: 60.5061% ( 117) 00:17:15.712 15568.377 - 15682.851: 62.0919% ( 136) 00:17:15.712 15682.851 - 15797.324: 63.9459% ( 159) 00:17:15.712 15797.324 - 15911.797: 65.4268% ( 127) 00:17:15.712 15911.797 - 16026.271: 67.3857% ( 168) 00:17:15.712 16026.271 - 16140.744: 69.4729% ( 179) 00:17:15.712 16140.744 - 16255.217: 71.2337% ( 151) 00:17:15.712 16255.217 - 16369.691: 72.9827% ( 150) 00:17:15.712 16369.691 - 16484.164: 75.0233% ( 175) 00:17:15.712 16484.164 - 16598.638: 77.0056% ( 170) 00:17:15.712 16598.638 - 16713.111: 79.6642% ( 228) 00:17:15.712 16713.111 - 16827.584: 81.7281% ( 177) 00:17:15.712 16827.584 - 16942.058: 83.3839% ( 142) 00:17:15.712 16942.058 - 17056.531: 84.9580% ( 135) 00:17:15.712 17056.531 - 17171.004: 86.5089% ( 133) 00:17:15.712 17171.004 - 17285.478: 88.3745% ( 160) 00:17:15.712 17285.478 - 17399.951: 89.8438% ( 126) 00:17:15.712 17399.951 - 17514.424: 90.9398% ( 94) 00:17:15.712 17514.424 - 17628.898: 91.8610% ( 79) 00:17:15.712 17628.898 - 17743.371: 92.3741% ( 44) 00:17:15.712 17743.371 - 17857.845: 92.7938% ( 36) 00:17:15.712 17857.845 - 17972.318: 93.2952% ( 43) 00:17:15.712 17972.318 - 18086.791: 94.1465% ( 73) 00:17:15.712 18086.791 - 18201.265: 94.5779% ( 37) 00:17:15.712 18201.265 - 18315.738: 94.8577% ( 24) 
00:17:15.712 18315.738 - 18430.211: 95.3475% ( 42) 00:17:15.712 18430.211 - 18544.685: 95.8139% ( 40) 00:17:15.712 18544.685 - 18659.158: 96.2220% ( 35) 00:17:15.712 18659.158 - 18773.631: 96.4902% ( 23) 00:17:15.712 18773.631 - 18888.105: 96.6884% ( 17) 00:17:15.712 18888.105 - 19002.578: 96.8983% ( 18) 00:17:15.712 19002.578 - 19117.052: 97.1549% ( 22) 00:17:15.712 19117.052 - 19231.525: 97.4464% ( 25) 00:17:15.712 19231.525 - 19345.998: 97.7845% ( 29) 00:17:15.712 19345.998 - 19460.472: 97.9594% ( 15) 00:17:15.712 19460.472 - 19574.945: 98.2626% ( 26) 00:17:15.712 19574.945 - 19689.418: 98.5075% ( 21) 00:17:15.712 28847.287 - 28961.761: 98.5191% ( 1) 00:17:15.712 28961.761 - 29076.234: 98.5541% ( 3) 00:17:15.712 29076.234 - 29190.707: 98.6007% ( 4) 00:17:15.712 29190.707 - 29305.181: 98.6590% ( 5) 00:17:15.712 29305.181 - 29534.128: 98.7640% ( 9) 00:17:15.712 29534.128 - 29763.074: 98.8573% ( 8) 00:17:15.712 29763.074 - 29992.021: 98.9506% ( 8) 00:17:15.712 29992.021 - 30220.968: 99.0555% ( 9) 00:17:15.712 30220.968 - 30449.914: 99.1488% ( 8) 00:17:15.712 30449.914 - 30678.861: 99.2421% ( 8) 00:17:15.712 30678.861 - 30907.808: 99.2537% ( 1) 00:17:15.712 39378.837 - 39607.783: 99.3120% ( 5) 00:17:15.712 39607.783 - 39836.730: 99.3703% ( 5) 00:17:15.712 39836.730 - 40065.677: 99.4286% ( 5) 00:17:15.712 40065.677 - 40294.624: 99.4869% ( 5) 00:17:15.712 40294.624 - 40523.570: 99.5336% ( 4) 00:17:15.712 40523.570 - 40752.517: 99.5919% ( 5) 00:17:15.712 40752.517 - 40981.464: 99.6618% ( 6) 00:17:15.712 40981.464 - 41210.410: 99.7201% ( 5) 00:17:15.712 41210.410 - 41439.357: 99.7785% ( 5) 00:17:15.712 41439.357 - 41668.304: 99.8368% ( 5) 00:17:15.712 41668.304 - 41897.251: 99.9067% ( 6) 00:17:15.712 41897.251 - 42126.197: 99.9767% ( 6) 00:17:15.712 42126.197 - 42355.144: 100.0000% ( 2) 00:17:15.712 00:17:15.712 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:17:15.712 ============================================================================== 00:17:15.712 Range in us Cumulative IO count 00:17:15.712 9615.762 - 9672.999: 0.0117% ( 1) 00:17:15.712 9672.999 - 9730.236: 0.0233% ( 1) 00:17:15.712 9730.236 - 9787.472: 0.0350% ( 1) 00:17:15.712 9787.472 - 9844.709: 0.0583% ( 2) 00:17:15.712 9844.709 - 9901.946: 0.1283% ( 6) 00:17:15.712 9901.946 - 9959.183: 0.3265% ( 17) 00:17:15.712 9959.183 - 10016.419: 0.5597% ( 20) 00:17:15.712 10016.419 - 10073.656: 0.8162% ( 22) 00:17:15.712 10073.656 - 10130.893: 1.2477% ( 37) 00:17:15.712 10130.893 - 10188.129: 1.6791% ( 37) 00:17:15.712 10188.129 - 10245.366: 2.0173% ( 29) 00:17:15.712 10245.366 - 10302.603: 2.4137% ( 34) 00:17:15.712 10302.603 - 10359.839: 2.7635% ( 30) 00:17:15.712 10359.839 - 10417.076: 2.9734% ( 18) 00:17:15.712 10417.076 - 10474.313: 3.1367% ( 14) 00:17:15.712 10474.313 - 10531.549: 3.3232% ( 16) 00:17:15.712 10531.549 - 10588.786: 3.4398% ( 10) 00:17:15.712 10588.786 - 10646.023: 3.5331% ( 8) 00:17:15.712 10646.023 - 10703.259: 3.6497% ( 10) 00:17:15.712 10703.259 - 10760.496: 3.7663% ( 10) 00:17:15.712 10760.496 - 10817.733: 3.9296% ( 14) 00:17:15.712 10817.733 - 10874.969: 4.1045% ( 15) 00:17:15.712 10874.969 - 10932.206: 4.2910% ( 16) 00:17:15.712 10932.206 - 10989.443: 4.6758% ( 33) 00:17:15.712 10989.443 - 11046.679: 4.8857% ( 18) 00:17:15.712 11046.679 - 11103.916: 5.0373% ( 13) 00:17:15.712 11103.916 - 11161.153: 5.2705% ( 20) 00:17:15.712 11161.153 - 11218.390: 5.7136% ( 38) 00:17:15.712 11218.390 - 11275.626: 6.0984% ( 33) 00:17:15.712 11275.626 - 11332.863: 6.4366% ( 29) 00:17:15.712 11332.863 - 11390.100: 
6.9729% ( 46) 00:17:15.712 11390.100 - 11447.336: 7.4044% ( 37) 00:17:15.712 11447.336 - 11504.573: 7.9174% ( 44) 00:17:15.712 11504.573 - 11561.810: 8.4771% ( 48) 00:17:15.712 11561.810 - 11619.046: 8.8386% ( 31) 00:17:15.712 11619.046 - 11676.283: 9.3983% ( 48) 00:17:15.712 11676.283 - 11733.520: 10.0163% ( 53) 00:17:15.712 11733.520 - 11790.756: 10.7743% ( 65) 00:17:15.712 11790.756 - 11847.993: 11.3573% ( 50) 00:17:15.712 11847.993 - 11905.230: 11.9520% ( 51) 00:17:15.712 11905.230 - 11962.466: 12.3834% ( 37) 00:17:15.712 11962.466 - 12019.703: 12.7215% ( 29) 00:17:15.712 12019.703 - 12076.940: 13.0597% ( 29) 00:17:15.712 12076.940 - 12134.176: 13.4328% ( 32) 00:17:15.712 12134.176 - 12191.413: 13.8643% ( 37) 00:17:15.712 12191.413 - 12248.650: 14.3074% ( 38) 00:17:15.712 12248.650 - 12305.886: 14.8438% ( 46) 00:17:15.712 12305.886 - 12363.123: 15.2985% ( 39) 00:17:15.712 12363.123 - 12420.360: 15.8116% ( 44) 00:17:15.712 12420.360 - 12477.597: 16.1964% ( 33) 00:17:15.712 12477.597 - 12534.833: 16.5112% ( 27) 00:17:15.713 12534.833 - 12592.070: 17.0476% ( 46) 00:17:15.713 12592.070 - 12649.307: 17.4674% ( 36) 00:17:15.713 12649.307 - 12706.543: 17.8755% ( 35) 00:17:15.713 12706.543 - 12763.780: 18.5634% ( 59) 00:17:15.713 12763.780 - 12821.017: 19.1814% ( 53) 00:17:15.713 12821.017 - 12878.253: 20.0210% ( 72) 00:17:15.713 12878.253 - 12935.490: 20.6040% ( 50) 00:17:15.713 12935.490 - 12992.727: 21.2453% ( 55) 00:17:15.713 12992.727 - 13049.963: 22.0149% ( 66) 00:17:15.713 13049.963 - 13107.200: 22.7729% ( 65) 00:17:15.713 13107.200 - 13164.437: 23.6241% ( 73) 00:17:15.713 13164.437 - 13221.673: 24.4170% ( 68) 00:17:15.713 13221.673 - 13278.910: 25.1283% ( 61) 00:17:15.713 13278.910 - 13336.147: 25.7696% ( 55) 00:17:15.713 13336.147 - 13393.383: 26.5392% ( 66) 00:17:15.713 13393.383 - 13450.620: 27.3904% ( 73) 00:17:15.713 13450.620 - 13507.857: 28.2649% ( 75) 00:17:15.713 13507.857 - 13565.093: 29.2211% ( 82) 00:17:15.713 13565.093 - 13622.330: 30.4104% ( 102) 00:17:15.713 13622.330 - 13679.567: 31.2500% ( 72) 00:17:15.713 13679.567 - 13736.803: 32.1012% ( 73) 00:17:15.713 13736.803 - 13794.040: 32.8825% ( 67) 00:17:15.713 13794.040 - 13851.277: 33.5821% ( 60) 00:17:15.713 13851.277 - 13908.514: 34.3867% ( 69) 00:17:15.713 13908.514 - 13965.750: 35.2729% ( 76) 00:17:15.713 13965.750 - 14022.987: 36.0424% ( 66) 00:17:15.713 14022.987 - 14080.224: 37.2435% ( 103) 00:17:15.713 14080.224 - 14137.460: 38.4445% ( 103) 00:17:15.713 14137.460 - 14194.697: 39.2957% ( 73) 00:17:15.713 14194.697 - 14251.934: 40.3685% ( 92) 00:17:15.713 14251.934 - 14309.170: 41.2430% ( 75) 00:17:15.713 14309.170 - 14366.407: 42.0009% ( 65) 00:17:15.713 14366.407 - 14423.644: 42.7122% ( 61) 00:17:15.713 14423.644 - 14480.880: 43.4002% ( 59) 00:17:15.713 14480.880 - 14538.117: 44.2631% ( 74) 00:17:15.713 14538.117 - 14595.354: 45.0560% ( 68) 00:17:15.713 14595.354 - 14652.590: 45.8605% ( 69) 00:17:15.713 14652.590 - 14767.064: 47.9594% ( 180) 00:17:15.713 14767.064 - 14881.537: 49.7901% ( 157) 00:17:15.713 14881.537 - 14996.010: 51.5742% ( 153) 00:17:15.713 14996.010 - 15110.484: 52.9268% ( 116) 00:17:15.713 15110.484 - 15224.957: 54.4076% ( 127) 00:17:15.713 15224.957 - 15339.431: 56.3316% ( 165) 00:17:15.713 15339.431 - 15453.904: 58.4072% ( 178) 00:17:15.713 15453.904 - 15568.377: 60.6110% ( 189) 00:17:15.713 15568.377 - 15682.851: 62.6516% ( 175) 00:17:15.713 15682.851 - 15797.324: 64.5989% ( 167) 00:17:15.713 15797.324 - 15911.797: 66.4762% ( 161) 00:17:15.713 15911.797 - 16026.271: 68.2836% ( 155) 
00:17:15.713 16026.271 - 16140.744: 70.4174% ( 183) 00:17:15.713 16140.744 - 16255.217: 71.8517% ( 123) 00:17:15.713 16255.217 - 16369.691: 73.4025% ( 133) 00:17:15.713 16369.691 - 16484.164: 75.4314% ( 174) 00:17:15.713 16484.164 - 16598.638: 78.0201% ( 222) 00:17:15.713 16598.638 - 16713.111: 80.6670% ( 227) 00:17:15.713 16713.111 - 16827.584: 83.0690% ( 206) 00:17:15.713 16827.584 - 16942.058: 85.0979% ( 174) 00:17:15.713 16942.058 - 17056.531: 86.6255% ( 131) 00:17:15.713 17056.531 - 17171.004: 87.8965% ( 109) 00:17:15.713 17171.004 - 17285.478: 89.1791% ( 110) 00:17:15.713 17285.478 - 17399.951: 90.1236% ( 81) 00:17:15.713 17399.951 - 17514.424: 90.9632% ( 72) 00:17:15.713 17514.424 - 17628.898: 91.9076% ( 81) 00:17:15.713 17628.898 - 17743.371: 93.0854% ( 101) 00:17:15.713 17743.371 - 17857.845: 94.2864% ( 103) 00:17:15.713 17857.845 - 17972.318: 94.9277% ( 55) 00:17:15.713 17972.318 - 18086.791: 95.1376% ( 18) 00:17:15.713 18086.791 - 18201.265: 95.2425% ( 9) 00:17:15.713 18201.265 - 18315.738: 95.4874% ( 21) 00:17:15.713 18315.738 - 18430.211: 95.7906% ( 26) 00:17:15.713 18430.211 - 18544.685: 96.2337% ( 38) 00:17:15.713 18544.685 - 18659.158: 96.6651% ( 37) 00:17:15.713 18659.158 - 18773.631: 96.9683% ( 26) 00:17:15.713 18773.631 - 18888.105: 97.2015% ( 20) 00:17:15.713 18888.105 - 19002.578: 97.5280% ( 28) 00:17:15.713 19002.578 - 19117.052: 97.7612% ( 20) 00:17:15.713 19117.052 - 19231.525: 97.8778% ( 10) 00:17:15.713 19231.525 - 19345.998: 97.9827% ( 9) 00:17:15.713 19345.998 - 19460.472: 98.2743% ( 25) 00:17:15.713 19460.472 - 19574.945: 98.3792% ( 9) 00:17:15.713 19574.945 - 19689.418: 98.4725% ( 8) 00:17:15.713 19689.418 - 19803.892: 98.5075% ( 3) 00:17:15.713 27244.660 - 27359.134: 98.5424% ( 3) 00:17:15.713 27359.134 - 27473.607: 98.5891% ( 4) 00:17:15.713 27473.607 - 27588.080: 98.6357% ( 4) 00:17:15.713 27588.080 - 27702.554: 98.6824% ( 4) 00:17:15.713 27702.554 - 27817.027: 98.7407% ( 5) 00:17:15.713 27817.027 - 27931.500: 98.7873% ( 4) 00:17:15.713 27931.500 - 28045.974: 98.8340% ( 4) 00:17:15.713 28045.974 - 28160.447: 98.8923% ( 5) 00:17:15.713 28160.447 - 28274.921: 98.9389% ( 4) 00:17:15.713 28274.921 - 28389.394: 98.9972% ( 5) 00:17:15.713 28389.394 - 28503.867: 99.0438% ( 4) 00:17:15.713 28503.867 - 28618.341: 99.0905% ( 4) 00:17:15.713 28618.341 - 28732.814: 99.1488% ( 5) 00:17:15.713 28732.814 - 28847.287: 99.1954% ( 4) 00:17:15.713 28847.287 - 28961.761: 99.2421% ( 4) 00:17:15.713 28961.761 - 29076.234: 99.2537% ( 1) 00:17:15.713 38463.050 - 38691.997: 99.3004% ( 4) 00:17:15.713 38691.997 - 38920.943: 99.3587% ( 5) 00:17:15.713 38920.943 - 39149.890: 99.4170% ( 5) 00:17:15.713 39149.890 - 39378.837: 99.4753% ( 5) 00:17:15.713 39378.837 - 39607.783: 99.5336% ( 5) 00:17:15.713 39607.783 - 39836.730: 99.5919% ( 5) 00:17:15.713 39836.730 - 40065.677: 99.6502% ( 5) 00:17:15.713 40065.677 - 40294.624: 99.7201% ( 6) 00:17:15.713 40294.624 - 40523.570: 99.7785% ( 5) 00:17:15.713 40523.570 - 40752.517: 99.8484% ( 6) 00:17:15.713 40752.517 - 40981.464: 99.9067% ( 5) 00:17:15.713 40981.464 - 41210.410: 99.9650% ( 5) 00:17:15.713 41210.410 - 41439.357: 100.0000% ( 3) 00:17:15.713 00:17:15.713 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:17:15.713 ============================================================================== 00:17:15.713 Range in us Cumulative IO count 00:17:15.713 9844.709 - 9901.946: 0.0579% ( 5) 00:17:15.713 9901.946 - 9959.183: 0.1042% ( 4) 00:17:15.713 9959.183 - 10016.419: 0.1620% ( 5) 00:17:15.713 10016.419 - 10073.656: 0.3241% ( 
14) 00:17:15.713 10073.656 - 10130.893: 0.6366% ( 27) 00:17:15.713 10130.893 - 10188.129: 0.9375% ( 26) 00:17:15.713 10188.129 - 10245.366: 1.2269% ( 25) 00:17:15.713 10245.366 - 10302.603: 1.6667% ( 38) 00:17:15.713 10302.603 - 10359.839: 2.0602% ( 34) 00:17:15.713 10359.839 - 10417.076: 2.4190% ( 31) 00:17:15.713 10417.076 - 10474.313: 2.6620% ( 21) 00:17:15.713 10474.313 - 10531.549: 2.8472% ( 16) 00:17:15.713 10531.549 - 10588.786: 3.1019% ( 22) 00:17:15.713 10588.786 - 10646.023: 3.5185% ( 36) 00:17:15.713 10646.023 - 10703.259: 3.6343% ( 10) 00:17:15.713 10703.259 - 10760.496: 3.7153% ( 7) 00:17:15.713 10760.496 - 10817.733: 3.7616% ( 4) 00:17:15.713 10817.733 - 10874.969: 3.8542% ( 8) 00:17:15.713 10874.969 - 10932.206: 3.9815% ( 11) 00:17:15.713 10932.206 - 10989.443: 4.0509% ( 6) 00:17:15.713 10989.443 - 11046.679: 4.1782% ( 11) 00:17:15.713 11046.679 - 11103.916: 4.3056% ( 11) 00:17:15.713 11103.916 - 11161.153: 4.4213% ( 10) 00:17:15.713 11161.153 - 11218.390: 4.7801% ( 31) 00:17:15.713 11218.390 - 11275.626: 4.9769% ( 17) 00:17:15.713 11275.626 - 11332.863: 5.2778% ( 26) 00:17:15.713 11332.863 - 11390.100: 5.8912% ( 53) 00:17:15.713 11390.100 - 11447.336: 6.5509% ( 57) 00:17:15.713 11447.336 - 11504.573: 7.1296% ( 50) 00:17:15.713 11504.573 - 11561.810: 7.7778% ( 56) 00:17:15.713 11561.810 - 11619.046: 8.5069% ( 63) 00:17:15.714 11619.046 - 11676.283: 9.0972% ( 51) 00:17:15.714 11676.283 - 11733.520: 9.7917% ( 60) 00:17:15.714 11733.520 - 11790.756: 10.3356% ( 47) 00:17:15.714 11790.756 - 11847.993: 10.7060% ( 32) 00:17:15.714 11847.993 - 11905.230: 11.0995% ( 34) 00:17:15.714 11905.230 - 11962.466: 11.5972% ( 43) 00:17:15.714 11962.466 - 12019.703: 12.1875% ( 51) 00:17:15.714 12019.703 - 12076.940: 12.6042% ( 36) 00:17:15.714 12076.940 - 12134.176: 12.9861% ( 33) 00:17:15.714 12134.176 - 12191.413: 13.3102% ( 28) 00:17:15.714 12191.413 - 12248.650: 13.7153% ( 35) 00:17:15.714 12248.650 - 12305.886: 14.2477% ( 46) 00:17:15.714 12305.886 - 12363.123: 14.8727% ( 54) 00:17:15.714 12363.123 - 12420.360: 15.4282% ( 48) 00:17:15.714 12420.360 - 12477.597: 16.0417% ( 53) 00:17:15.714 12477.597 - 12534.833: 16.4699% ( 37) 00:17:15.714 12534.833 - 12592.070: 16.8866% ( 36) 00:17:15.714 12592.070 - 12649.307: 17.3380% ( 39) 00:17:15.714 12649.307 - 12706.543: 17.7894% ( 39) 00:17:15.714 12706.543 - 12763.780: 18.4028% ( 53) 00:17:15.714 12763.780 - 12821.017: 18.9931% ( 51) 00:17:15.714 12821.017 - 12878.253: 19.6296% ( 55) 00:17:15.714 12878.253 - 12935.490: 20.2662% ( 55) 00:17:15.714 12935.490 - 12992.727: 20.8218% ( 48) 00:17:15.714 12992.727 - 13049.963: 21.3657% ( 47) 00:17:15.714 13049.963 - 13107.200: 21.9676% ( 52) 00:17:15.714 13107.200 - 13164.437: 22.6389% ( 58) 00:17:15.714 13164.437 - 13221.673: 23.6574% ( 88) 00:17:15.714 13221.673 - 13278.910: 24.7454% ( 94) 00:17:15.714 13278.910 - 13336.147: 25.6366% ( 77) 00:17:15.714 13336.147 - 13393.383: 26.3079% ( 58) 00:17:15.714 13393.383 - 13450.620: 26.9560% ( 56) 00:17:15.714 13450.620 - 13507.857: 27.6620% ( 61) 00:17:15.714 13507.857 - 13565.093: 28.3681% ( 61) 00:17:15.714 13565.093 - 13622.330: 29.3056% ( 81) 00:17:15.714 13622.330 - 13679.567: 30.2199% ( 79) 00:17:15.714 13679.567 - 13736.803: 31.0995% ( 76) 00:17:15.714 13736.803 - 13794.040: 31.8287% ( 63) 00:17:15.714 13794.040 - 13851.277: 32.5000% ( 58) 00:17:15.714 13851.277 - 13908.514: 33.2060% ( 61) 00:17:15.714 13908.514 - 13965.750: 33.8542% ( 56) 00:17:15.714 13965.750 - 14022.987: 34.5486% ( 60) 00:17:15.714 14022.987 - 14080.224: 35.3125% ( 66) 00:17:15.714 
14080.224 - 14137.460: 36.1458% ( 72) 00:17:15.714 14137.460 - 14194.697: 37.0602% ( 79) 00:17:15.714 14194.697 - 14251.934: 38.0093% ( 82) 00:17:15.714 14251.934 - 14309.170: 39.2361% ( 106) 00:17:15.714 14309.170 - 14366.407: 40.4745% ( 107) 00:17:15.714 14366.407 - 14423.644: 41.3773% ( 78) 00:17:15.714 14423.644 - 14480.880: 42.2801% ( 78) 00:17:15.714 14480.880 - 14538.117: 43.1134% ( 72) 00:17:15.714 14538.117 - 14595.354: 44.0046% ( 77) 00:17:15.714 14595.354 - 14652.590: 44.8843% ( 76) 00:17:15.714 14652.590 - 14767.064: 46.6898% ( 156) 00:17:15.714 14767.064 - 14881.537: 48.2870% ( 138) 00:17:15.714 14881.537 - 14996.010: 49.9653% ( 145) 00:17:15.714 14996.010 - 15110.484: 51.6898% ( 149) 00:17:15.714 15110.484 - 15224.957: 54.1551% ( 213) 00:17:15.714 15224.957 - 15339.431: 56.2500% ( 181) 00:17:15.714 15339.431 - 15453.904: 58.3912% ( 185) 00:17:15.714 15453.904 - 15568.377: 60.7755% ( 206) 00:17:15.714 15568.377 - 15682.851: 62.8935% ( 183) 00:17:15.714 15682.851 - 15797.324: 65.0579% ( 187) 00:17:15.714 15797.324 - 15911.797: 67.2454% ( 189) 00:17:15.714 15911.797 - 16026.271: 69.2477% ( 173) 00:17:15.714 16026.271 - 16140.744: 70.7986% ( 134) 00:17:15.714 16140.744 - 16255.217: 72.6273% ( 158) 00:17:15.714 16255.217 - 16369.691: 74.6644% ( 176) 00:17:15.714 16369.691 - 16484.164: 76.5856% ( 166) 00:17:15.714 16484.164 - 16598.638: 78.6574% ( 179) 00:17:15.714 16598.638 - 16713.111: 81.2847% ( 227) 00:17:15.714 16713.111 - 16827.584: 83.6806% ( 207) 00:17:15.714 16827.584 - 16942.058: 85.4977% ( 157) 00:17:15.714 16942.058 - 17056.531: 87.2106% ( 148) 00:17:15.714 17056.531 - 17171.004: 88.6458% ( 124) 00:17:15.714 17171.004 - 17285.478: 89.6296% ( 85) 00:17:15.714 17285.478 - 17399.951: 90.6250% ( 86) 00:17:15.714 17399.951 - 17514.424: 91.9676% ( 116) 00:17:15.714 17514.424 - 17628.898: 93.0903% ( 97) 00:17:15.714 17628.898 - 17743.371: 94.0162% ( 80) 00:17:15.714 17743.371 - 17857.845: 94.8264% ( 70) 00:17:15.714 17857.845 - 17972.318: 95.5903% ( 66) 00:17:15.714 17972.318 - 18086.791: 96.0995% ( 44) 00:17:15.714 18086.791 - 18201.265: 96.5856% ( 42) 00:17:15.714 18201.265 - 18315.738: 96.8981% ( 27) 00:17:15.714 18315.738 - 18430.211: 97.1412% ( 21) 00:17:15.714 18430.211 - 18544.685: 97.3495% ( 18) 00:17:15.714 18544.685 - 18659.158: 97.5116% ( 14) 00:17:15.714 18659.158 - 18773.631: 97.6736% ( 14) 00:17:15.714 18773.631 - 18888.105: 97.8125% ( 12) 00:17:15.714 18888.105 - 19002.578: 97.9167% ( 9) 00:17:15.714 19002.578 - 19117.052: 98.0440% ( 11) 00:17:15.714 19117.052 - 19231.525: 98.1597% ( 10) 00:17:15.714 19231.525 - 19345.998: 98.2292% ( 6) 00:17:15.714 19345.998 - 19460.472: 98.3102% ( 7) 00:17:15.714 19460.472 - 19574.945: 98.4028% ( 8) 00:17:15.714 19574.945 - 19689.418: 98.5417% ( 12) 00:17:15.714 19689.418 - 19803.892: 98.6343% ( 8) 00:17:15.714 19803.892 - 19918.365: 98.7500% ( 10) 00:17:15.714 19918.365 - 20032.838: 98.8773% ( 11) 00:17:15.714 20032.838 - 20147.312: 99.0394% ( 14) 00:17:15.714 20147.312 - 20261.785: 99.1319% ( 8) 00:17:15.714 20261.785 - 20376.259: 99.2130% ( 7) 00:17:15.714 20376.259 - 20490.732: 99.2593% ( 4) 00:17:15.714 27244.660 - 27359.134: 99.2824% ( 2) 00:17:15.714 27359.134 - 27473.607: 99.3056% ( 2) 00:17:15.714 27473.607 - 27588.080: 99.3403% ( 3) 00:17:15.714 27588.080 - 27702.554: 99.3750% ( 3) 00:17:15.714 27702.554 - 27817.027: 99.3981% ( 2) 00:17:15.714 27817.027 - 27931.500: 99.4329% ( 3) 00:17:15.714 27931.500 - 28045.974: 99.4560% ( 2) 00:17:15.714 28045.974 - 28160.447: 99.4907% ( 3) 00:17:15.714 28160.447 - 28274.921: 
99.5139% ( 2) 00:17:15.714 28274.921 - 28389.394: 99.5370% ( 2) 00:17:15.714 28389.394 - 28503.867: 99.5718% ( 3) 00:17:15.714 28503.867 - 28618.341: 99.5949% ( 2) 00:17:15.714 28618.341 - 28732.814: 99.6296% ( 3) 00:17:15.714 28732.814 - 28847.287: 99.6644% ( 3) 00:17:15.714 28847.287 - 28961.761: 99.6875% ( 2) 00:17:15.714 28961.761 - 29076.234: 99.7222% ( 3) 00:17:15.714 29076.234 - 29190.707: 99.7454% ( 2) 00:17:15.714 29190.707 - 29305.181: 99.7801% ( 3) 00:17:15.714 29305.181 - 29534.128: 99.8495% ( 6) 00:17:15.714 29534.128 - 29763.074: 99.8958% ( 4) 00:17:15.714 29763.074 - 29992.021: 99.9653% ( 6) 00:17:15.714 29992.021 - 30220.968: 100.0000% ( 3) 00:17:15.714 00:17:15.714 09:31:16 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:17:15.714 00:17:15.714 real 0m2.566s 00:17:15.714 user 0m2.229s 00:17:15.714 sys 0m0.242s 00:17:15.714 09:31:16 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:15.714 09:31:16 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:17:15.714 ************************************ 00:17:15.714 END TEST nvme_perf 00:17:15.714 ************************************ 00:17:15.974 09:31:16 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:17:15.974 09:31:16 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:15.974 09:31:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:15.974 09:31:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.974 ************************************ 00:17:15.974 START TEST nvme_hello_world 00:17:15.974 ************************************ 00:17:15.974 09:31:16 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:17:15.974 Initializing NVMe Controllers 00:17:15.974 Attached to 0000:00:10.0 00:17:15.974 Namespace ID: 1 size: 6GB 00:17:15.974 Attached to 0000:00:11.0 00:17:15.974 Namespace ID: 1 size: 5GB 00:17:15.974 Attached to 0000:00:13.0 00:17:15.974 Namespace ID: 1 size: 1GB 00:17:15.974 Attached to 0000:00:12.0 00:17:15.974 Namespace ID: 1 size: 4GB 00:17:15.974 Namespace ID: 2 size: 4GB 00:17:15.974 Namespace ID: 3 size: 4GB 00:17:15.974 Initialization complete. 00:17:15.974 INFO: using host memory buffer for IO 00:17:15.974 Hello world! 00:17:15.974 INFO: using host memory buffer for IO 00:17:15.974 Hello world! 00:17:15.974 INFO: using host memory buffer for IO 00:17:15.974 Hello world! 00:17:15.974 INFO: using host memory buffer for IO 00:17:15.974 Hello world! 00:17:15.974 INFO: using host memory buffer for IO 00:17:15.974 Hello world! 00:17:15.974 INFO: using host memory buffer for IO 00:17:15.974 Hello world! 
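For reference, the hello_world run above is a standalone SPDK example binary; a minimal manual re-run would look roughly like the sketch below. The repository path and the -i 0 argument are taken from the run_test line in this log, and root privileges are assumed so the userspace driver can claim the PCIe devices.
# Sketch only: re-running the hello_world example by hand, using the paths shown in this log.
cd /home/vagrant/spdk_repo/spdk
sudo ./build/examples/hello_world -i 0   # the log above shows one "Hello world!" written and read back per attached namespace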
00:17:16.234 00:17:16.234 real 0m0.242s 00:17:16.234 user 0m0.097s 00:17:16.234 sys 0m0.107s 00:17:16.234 09:31:16 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.234 09:31:16 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:16.234 ************************************ 00:17:16.234 END TEST nvme_hello_world 00:17:16.234 ************************************ 00:17:16.234 09:31:16 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:17:16.234 09:31:16 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:16.234 09:31:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.234 09:31:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.234 ************************************ 00:17:16.234 START TEST nvme_sgl 00:17:16.234 ************************************ 00:17:16.234 09:31:16 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:17:16.494 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:17:16.494 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:17:16.494 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:17:16.494 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:17:16.494 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:17:16.494 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:17:16.494 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:17:16.494 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:17:16.494 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:17:16.494 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:17:16.494 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:17:16.494 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:17:16.494 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:17:16.494 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:17:16.494 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:17:16.494 NVMe Readv/Writev Request test 00:17:16.494 Attached to 0000:00:10.0 00:17:16.494 Attached to 0000:00:11.0 00:17:16.494 Attached to 0000:00:13.0 00:17:16.494 Attached to 0000:00:12.0 00:17:16.494 0000:00:10.0: build_io_request_2 test passed 00:17:16.494 0000:00:10.0: build_io_request_4 test passed 00:17:16.494 0000:00:10.0: build_io_request_5 test passed 00:17:16.494 0000:00:10.0: build_io_request_6 test passed 00:17:16.494 0000:00:10.0: build_io_request_7 test passed 00:17:16.494 0000:00:10.0: build_io_request_10 test passed 00:17:16.494 0000:00:11.0: build_io_request_2 test passed 00:17:16.494 0000:00:11.0: build_io_request_4 test passed 00:17:16.494 0000:00:11.0: build_io_request_5 test passed 00:17:16.494 0000:00:11.0: build_io_request_6 test passed 00:17:16.494 0000:00:11.0: build_io_request_7 test passed 00:17:16.494 0000:00:11.0: build_io_request_10 test passed 00:17:16.494 Cleaning up... 00:17:16.494 00:17:16.494 real 0m0.325s 00:17:16.494 user 0m0.156s 00:17:16.494 sys 0m0.123s 00:17:16.494 09:31:16 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.494 09:31:16 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:17:16.494 ************************************ 00:17:16.494 END TEST nvme_sgl 00:17:16.494 ************************************ 00:17:16.494 09:31:17 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:17:16.494 09:31:17 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:16.494 09:31:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.494 09:31:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.494 ************************************ 00:17:16.494 START TEST nvme_e2edp 00:17:16.494 ************************************ 00:17:16.494 09:31:17 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:17:16.754 NVMe Write/Read with End-to-End data protection test 00:17:16.754 Attached to 0000:00:10.0 00:17:16.754 Attached to 0000:00:11.0 00:17:16.754 Attached to 0000:00:13.0 00:17:16.754 Attached to 0000:00:12.0 00:17:16.754 Cleaning up... 
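The nvme_dp (end-to-end data protection) run above attaches to all four controllers and goes straight to cleanup, presumably because none of the emulated namespaces is formatted with protection information, so there is no protected I/O to exercise. A manual invocation would look roughly as follows; the path is the one from the run_test trace above and root privileges are assumed.
# Sketch only: manual run of the end-to-end data protection test from this log.
sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp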
00:17:16.754 00:17:16.754 real 0m0.227s 00:17:16.754 user 0m0.083s 00:17:16.754 sys 0m0.104s 00:17:16.754 09:31:17 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.754 09:31:17 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 ************************************ 00:17:16.754 END TEST nvme_e2edp 00:17:16.754 ************************************ 00:17:16.754 09:31:17 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:17:16.754 09:31:17 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:16.754 09:31:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.754 09:31:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 ************************************ 00:17:16.754 START TEST nvme_reserve 00:17:16.754 ************************************ 00:17:16.754 09:31:17 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:17:17.014 ===================================================== 00:17:17.014 NVMe Controller at PCI bus 0, device 16, function 0 00:17:17.014 ===================================================== 00:17:17.014 Reservations: Not Supported 00:17:17.014 ===================================================== 00:17:17.014 NVMe Controller at PCI bus 0, device 17, function 0 00:17:17.014 ===================================================== 00:17:17.014 Reservations: Not Supported 00:17:17.014 ===================================================== 00:17:17.014 NVMe Controller at PCI bus 0, device 19, function 0 00:17:17.014 ===================================================== 00:17:17.014 Reservations: Not Supported 00:17:17.014 ===================================================== 00:17:17.014 NVMe Controller at PCI bus 0, device 18, function 0 00:17:17.014 ===================================================== 00:17:17.014 Reservations: Not Supported 00:17:17.014 Reservation test passed 00:17:17.014 00:17:17.014 real 0m0.214s 00:17:17.014 user 0m0.068s 00:17:17.014 sys 0m0.102s 00:17:17.014 09:31:17 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.014 09:31:17 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:17:17.014 ************************************ 00:17:17.014 END TEST nvme_reserve 00:17:17.014 ************************************ 00:17:17.014 09:31:17 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:17:17.014 09:31:17 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:17.015 09:31:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.015 09:31:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:17.015 ************************************ 00:17:17.015 START TEST nvme_err_injection 00:17:17.015 ************************************ 00:17:17.015 09:31:17 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:17:17.274 NVMe Error Injection test 00:17:17.274 Attached to 0000:00:10.0 00:17:17.274 Attached to 0000:00:11.0 00:17:17.274 Attached to 0000:00:13.0 00:17:17.274 Attached to 0000:00:12.0 00:17:17.274 0000:00:10.0: get features failed as expected 00:17:17.274 0000:00:11.0: get features failed as expected 00:17:17.274 0000:00:13.0: get features failed as expected 00:17:17.274 0000:00:12.0: get features failed as expected 00:17:17.274 
0000:00:12.0: get features successfully as expected 00:17:17.274 0000:00:10.0: get features successfully as expected 00:17:17.274 0000:00:11.0: get features successfully as expected 00:17:17.274 0000:00:13.0: get features successfully as expected 00:17:17.274 0000:00:10.0: read failed as expected 00:17:17.274 0000:00:13.0: read failed as expected 00:17:17.274 0000:00:11.0: read failed as expected 00:17:17.274 0000:00:12.0: read failed as expected 00:17:17.274 0000:00:10.0: read successfully as expected 00:17:17.274 0000:00:11.0: read successfully as expected 00:17:17.274 0000:00:13.0: read successfully as expected 00:17:17.274 0000:00:12.0: read successfully as expected 00:17:17.274 Cleaning up... 00:17:17.274 00:17:17.274 real 0m0.258s 00:17:17.274 user 0m0.108s 00:17:17.274 sys 0m0.103s 00:17:17.274 09:31:17 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.274 09:31:17 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:17:17.274 ************************************ 00:17:17.274 END TEST nvme_err_injection 00:17:17.274 ************************************ 00:17:17.533 09:31:17 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:17:17.533 09:31:17 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:17:17.533 09:31:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.533 09:31:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:17.533 ************************************ 00:17:17.533 START TEST nvme_overhead 00:17:17.533 ************************************ 00:17:17.533 09:31:17 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:17:18.913 Initializing NVMe Controllers 00:17:18.913 Attached to 0000:00:10.0 00:17:18.913 Attached to 0000:00:11.0 00:17:18.913 Attached to 0000:00:13.0 00:17:18.913 Attached to 0000:00:12.0 00:17:18.913 Initialization complete. Launching workers. 
00:17:18.913 submit (in ns) avg, min, max = 12054.0, 8505.7, 67504.8 00:17:18.913 complete (in ns) avg, min, max = 6920.8, 5731.9, 26024.5 00:17:18.913 00:17:18.913 Submit histogram 00:17:18.913 ================ 00:17:18.913 Range in us Cumulative Count 00:17:18.913 8.496 - 8.552: 0.0130% ( 1) 00:17:18.913 10.229 - 10.285: 0.0259% ( 1) 00:17:18.913 10.285 - 10.341: 0.0519% ( 2) 00:17:18.913 10.341 - 10.397: 0.0648% ( 1) 00:17:18.913 10.397 - 10.452: 0.1426% ( 6) 00:17:18.913 10.452 - 10.508: 0.2593% ( 9) 00:17:18.913 10.508 - 10.564: 0.5574% ( 23) 00:17:18.913 10.564 - 10.620: 0.9982% ( 34) 00:17:18.913 10.620 - 10.676: 1.7630% ( 59) 00:17:18.913 10.676 - 10.732: 2.9168% ( 89) 00:17:18.913 10.732 - 10.788: 4.7576% ( 142) 00:17:18.913 10.788 - 10.844: 6.8706% ( 163) 00:17:18.913 10.844 - 10.900: 9.2818% ( 186) 00:17:18.913 10.900 - 10.955: 11.8356% ( 197) 00:17:18.913 10.955 - 11.011: 15.2969% ( 267) 00:17:18.913 11.011 - 11.067: 18.7322% ( 265) 00:17:18.913 11.067 - 11.123: 22.4397% ( 286) 00:17:18.913 11.123 - 11.179: 26.0565% ( 279) 00:17:18.913 11.179 - 11.235: 29.3622% ( 255) 00:17:18.913 11.235 - 11.291: 33.4327% ( 314) 00:17:18.913 11.291 - 11.347: 37.2310% ( 293) 00:17:18.913 11.347 - 11.403: 40.7441% ( 271) 00:17:18.913 11.403 - 11.459: 44.3350% ( 277) 00:17:18.913 11.459 - 11.514: 47.5499% ( 248) 00:17:18.913 11.514 - 11.570: 51.0371% ( 269) 00:17:18.913 11.570 - 11.626: 53.8372% ( 216) 00:17:18.913 11.626 - 11.682: 56.4558% ( 202) 00:17:18.913 11.682 - 11.738: 58.8022% ( 181) 00:17:18.913 11.738 - 11.794: 61.0837% ( 176) 00:17:18.913 11.794 - 11.850: 63.2227% ( 165) 00:17:18.913 11.850 - 11.906: 65.5691% ( 181) 00:17:18.913 11.906 - 11.962: 67.8377% ( 175) 00:17:18.913 11.962 - 12.017: 69.7174% ( 145) 00:17:18.913 12.017 - 12.073: 71.4415% ( 133) 00:17:18.913 12.073 - 12.129: 73.2435% ( 139) 00:17:18.913 12.129 - 12.185: 74.9546% ( 132) 00:17:18.913 12.185 - 12.241: 76.3028% ( 104) 00:17:18.913 12.241 - 12.297: 77.7677% ( 113) 00:17:18.913 12.297 - 12.353: 79.1807% ( 109) 00:17:18.913 12.353 - 12.409: 80.3733% ( 92) 00:17:18.913 12.409 - 12.465: 81.5012% ( 87) 00:17:18.913 12.465 - 12.521: 82.5901% ( 84) 00:17:18.913 12.521 - 12.576: 83.2383% ( 50) 00:17:18.913 12.576 - 12.632: 83.9642% ( 56) 00:17:18.913 12.632 - 12.688: 84.4957% ( 41) 00:17:18.913 12.688 - 12.744: 85.2476% ( 58) 00:17:18.913 12.744 - 12.800: 85.6235% ( 29) 00:17:18.913 12.800 - 12.856: 86.0254% ( 31) 00:17:18.913 12.856 - 12.912: 86.4662% ( 34) 00:17:18.913 12.912 - 12.968: 86.7514% ( 22) 00:17:18.913 12.968 - 13.024: 86.9588% ( 16) 00:17:18.913 13.024 - 13.079: 87.1921% ( 18) 00:17:18.913 13.079 - 13.135: 87.4125% ( 17) 00:17:18.913 13.135 - 13.191: 87.8403% ( 33) 00:17:18.913 13.191 - 13.247: 88.3848% ( 42) 00:17:18.913 13.247 - 13.303: 89.0329% ( 50) 00:17:18.913 13.303 - 13.359: 89.5774% ( 42) 00:17:18.913 13.359 - 13.415: 90.1478% ( 44) 00:17:18.913 13.415 - 13.471: 90.6404% ( 38) 00:17:18.913 13.471 - 13.527: 91.1071% ( 36) 00:17:18.913 13.527 - 13.583: 91.4441% ( 26) 00:17:18.913 13.583 - 13.638: 91.8460% ( 31) 00:17:18.913 13.638 - 13.694: 92.1960% ( 27) 00:17:18.913 13.694 - 13.750: 92.4942% ( 23) 00:17:18.913 13.750 - 13.806: 92.8312% ( 26) 00:17:18.913 13.806 - 13.862: 93.0386% ( 16) 00:17:18.913 13.862 - 13.918: 93.2201% ( 14) 00:17:18.913 13.918 - 13.974: 93.3757% ( 12) 00:17:18.913 13.974 - 14.030: 93.5183% ( 11) 00:17:18.913 14.030 - 14.086: 93.5831% ( 5) 00:17:18.913 14.086 - 14.141: 93.6349% ( 4) 00:17:18.913 14.141 - 14.197: 93.6868% ( 4) 00:17:18.913 14.197 - 14.253: 93.6998% ( 1) 
00:17:18.913 14.253 - 14.309: 93.7387% ( 3) 00:17:18.913 14.309 - 14.421: 93.8813% ( 11) 00:17:18.913 14.421 - 14.533: 94.1276% ( 19) 00:17:18.913 14.533 - 14.645: 94.3998% ( 21) 00:17:18.913 14.645 - 14.756: 94.6202% ( 17) 00:17:18.913 14.756 - 14.868: 94.8535% ( 18) 00:17:18.913 14.868 - 14.980: 95.0480% ( 15) 00:17:18.913 14.980 - 15.092: 95.3202% ( 21) 00:17:18.913 15.092 - 15.203: 95.5795% ( 20) 00:17:18.913 15.203 - 15.315: 95.6961% ( 9) 00:17:18.913 15.315 - 15.427: 95.8776% ( 14) 00:17:18.913 15.427 - 15.539: 95.9684% ( 7) 00:17:18.913 15.539 - 15.651: 96.1369% ( 13) 00:17:18.913 15.651 - 15.762: 96.2406% ( 8) 00:17:18.913 15.762 - 15.874: 96.3313% ( 7) 00:17:18.913 15.874 - 15.986: 96.4221% ( 7) 00:17:18.913 15.986 - 16.098: 96.5258% ( 8) 00:17:18.913 16.098 - 16.210: 96.6814% ( 12) 00:17:18.913 16.210 - 16.321: 96.7980% ( 9) 00:17:18.913 16.321 - 16.433: 96.9277% ( 10) 00:17:18.913 16.433 - 16.545: 97.1740% ( 19) 00:17:18.913 16.545 - 16.657: 97.2518% ( 6) 00:17:18.914 16.657 - 16.769: 97.3425% ( 7) 00:17:18.914 16.769 - 16.880: 97.4203% ( 6) 00:17:18.914 16.880 - 16.992: 97.5499% ( 10) 00:17:18.914 16.992 - 17.104: 97.6277% ( 6) 00:17:18.914 17.104 - 17.216: 97.7055% ( 6) 00:17:18.914 17.216 - 17.328: 97.7444% ( 3) 00:17:18.914 17.328 - 17.439: 97.8351% ( 7) 00:17:18.914 17.439 - 17.551: 97.9129% ( 6) 00:17:18.914 17.551 - 17.663: 97.9777% ( 5) 00:17:18.914 17.663 - 17.775: 98.0296% ( 4) 00:17:18.914 17.775 - 17.886: 98.1073% ( 6) 00:17:18.914 17.886 - 17.998: 98.1333% ( 2) 00:17:18.914 17.998 - 18.110: 98.1592% ( 2) 00:17:18.914 18.110 - 18.222: 98.1981% ( 3) 00:17:18.914 18.222 - 18.334: 98.2110% ( 1) 00:17:18.914 18.334 - 18.445: 98.2499% ( 3) 00:17:18.914 18.445 - 18.557: 98.2888% ( 3) 00:17:18.914 18.557 - 18.669: 98.3277% ( 3) 00:17:18.914 18.669 - 18.781: 98.3666% ( 3) 00:17:18.914 18.781 - 18.893: 98.4185% ( 4) 00:17:18.914 18.893 - 19.004: 98.4574% ( 3) 00:17:18.914 19.004 - 19.116: 98.5092% ( 4) 00:17:18.914 19.116 - 19.228: 98.5481% ( 3) 00:17:18.914 19.228 - 19.340: 98.5611% ( 1) 00:17:18.914 19.340 - 19.452: 98.5870% ( 2) 00:17:18.914 19.452 - 19.563: 98.6129% ( 2) 00:17:18.914 19.675 - 19.787: 98.6388% ( 2) 00:17:18.914 19.899 - 20.010: 98.6648% ( 2) 00:17:18.914 20.122 - 20.234: 98.6777% ( 1) 00:17:18.914 20.234 - 20.346: 98.6907% ( 1) 00:17:18.914 20.569 - 20.681: 98.7037% ( 1) 00:17:18.914 20.681 - 20.793: 98.7166% ( 1) 00:17:18.914 21.017 - 21.128: 98.7296% ( 1) 00:17:18.914 21.240 - 21.352: 98.7425% ( 1) 00:17:18.914 21.687 - 21.799: 98.7814% ( 3) 00:17:18.914 21.799 - 21.911: 98.7944% ( 1) 00:17:18.914 21.911 - 22.023: 98.8203% ( 2) 00:17:18.914 22.023 - 22.134: 98.8722% ( 4) 00:17:18.914 22.134 - 22.246: 98.8981% ( 2) 00:17:18.914 22.246 - 22.358: 98.9240% ( 2) 00:17:18.914 22.358 - 22.470: 99.0018% ( 6) 00:17:18.914 22.470 - 22.582: 99.0277% ( 2) 00:17:18.914 22.582 - 22.693: 99.1185% ( 7) 00:17:18.914 22.693 - 22.805: 99.1574% ( 3) 00:17:18.914 22.805 - 22.917: 99.2481% ( 7) 00:17:18.914 22.917 - 23.029: 99.2740% ( 2) 00:17:18.914 23.029 - 23.141: 99.3129% ( 3) 00:17:18.914 23.141 - 23.252: 99.4037% ( 7) 00:17:18.914 23.252 - 23.364: 99.4426% ( 3) 00:17:18.914 23.364 - 23.476: 99.4944% ( 4) 00:17:18.914 23.476 - 23.588: 99.5592% ( 5) 00:17:18.914 23.588 - 23.700: 99.5852% ( 2) 00:17:18.914 23.700 - 23.811: 99.5981% ( 1) 00:17:18.914 23.811 - 23.923: 99.6111% ( 1) 00:17:18.914 23.923 - 24.035: 99.6370% ( 2) 00:17:18.914 24.035 - 24.147: 99.6630% ( 2) 00:17:18.914 24.147 - 24.259: 99.6759% ( 1) 00:17:18.914 24.706 - 24.817: 99.6889% ( 1) 00:17:18.914 24.929 
- 25.041: 99.7018% ( 1) 00:17:18.914 25.041 - 25.153: 99.7278% ( 2) 00:17:18.914 25.153 - 25.265: 99.7407% ( 1) 00:17:18.914 25.376 - 25.488: 99.7537% ( 1) 00:17:18.914 25.712 - 25.824: 99.7667% ( 1) 00:17:18.914 26.271 - 26.383: 99.7796% ( 1) 00:17:18.914 26.383 - 26.494: 99.7926% ( 1) 00:17:18.914 26.606 - 26.718: 99.8055% ( 1) 00:17:18.914 27.165 - 27.277: 99.8185% ( 1) 00:17:18.914 27.277 - 27.389: 99.8315% ( 1) 00:17:18.914 27.389 - 27.500: 99.8574% ( 2) 00:17:18.914 27.612 - 27.724: 99.8833% ( 2) 00:17:18.914 27.836 - 27.948: 99.8963% ( 1) 00:17:18.914 27.948 - 28.059: 99.9093% ( 1) 00:17:18.914 28.059 - 28.171: 99.9222% ( 1) 00:17:18.914 28.395 - 28.507: 99.9352% ( 1) 00:17:18.914 28.507 - 28.618: 99.9481% ( 1) 00:17:18.914 29.513 - 29.736: 99.9611% ( 1) 00:17:18.914 29.736 - 29.960: 99.9741% ( 1) 00:17:18.914 36.220 - 36.444: 99.9870% ( 1) 00:17:18.914 67.074 - 67.521: 100.0000% ( 1) 00:17:18.914 00:17:18.914 Complete histogram 00:17:18.914 ================== 00:17:18.914 Range in us Cumulative Count 00:17:18.914 5.729 - 5.757: 0.0648% ( 5) 00:17:18.914 5.757 - 5.785: 0.4408% ( 29) 00:17:18.914 5.785 - 5.813: 1.0760% ( 49) 00:17:18.914 5.813 - 5.841: 2.4242% ( 104) 00:17:18.914 5.841 - 5.869: 4.3168% ( 146) 00:17:18.914 5.869 - 5.897: 6.6114% ( 177) 00:17:18.914 5.897 - 5.925: 8.7892% ( 168) 00:17:18.914 5.925 - 5.953: 10.8245% ( 157) 00:17:18.914 5.953 - 5.981: 12.7949% ( 152) 00:17:18.914 5.981 - 6.009: 14.6617% ( 144) 00:17:18.914 6.009 - 6.037: 17.0080% ( 181) 00:17:18.914 6.037 - 6.065: 19.0952% ( 161) 00:17:18.914 6.065 - 6.093: 20.7285% ( 126) 00:17:18.914 6.093 - 6.121: 22.2712% ( 119) 00:17:18.914 6.121 - 6.148: 23.4509% ( 91) 00:17:18.914 6.148 - 6.176: 24.6954% ( 96) 00:17:18.914 6.176 - 6.204: 25.9917% ( 100) 00:17:18.914 6.204 - 6.232: 27.2232% ( 95) 00:17:18.914 6.232 - 6.260: 28.4159% ( 92) 00:17:18.914 6.260 - 6.288: 29.6215% ( 93) 00:17:18.914 6.288 - 6.316: 31.0863% ( 113) 00:17:18.914 6.316 - 6.344: 32.5382% ( 112) 00:17:18.914 6.344 - 6.372: 33.9383% ( 108) 00:17:18.914 6.372 - 6.400: 35.7532% ( 140) 00:17:18.914 6.400 - 6.428: 37.5292% ( 137) 00:17:18.914 6.428 - 6.456: 39.7718% ( 173) 00:17:18.914 6.456 - 6.484: 42.4423% ( 206) 00:17:18.914 6.484 - 6.512: 45.6702% ( 249) 00:17:18.914 6.512 - 6.540: 48.8203% ( 243) 00:17:18.914 6.540 - 6.568: 51.8019% ( 230) 00:17:18.914 6.568 - 6.596: 54.2390% ( 188) 00:17:18.914 6.596 - 6.624: 56.5206% ( 176) 00:17:18.914 6.624 - 6.652: 58.4651% ( 150) 00:17:18.914 6.652 - 6.679: 60.1115% ( 127) 00:17:18.914 6.679 - 6.707: 61.7190% ( 124) 00:17:18.914 6.707 - 6.735: 63.3653% ( 127) 00:17:18.914 6.735 - 6.763: 64.6746% ( 101) 00:17:18.914 6.763 - 6.791: 66.1006% ( 110) 00:17:18.914 6.791 - 6.819: 67.3321% ( 95) 00:17:18.914 6.819 - 6.847: 68.3173% ( 76) 00:17:18.914 6.847 - 6.875: 69.2766% ( 74) 00:17:18.914 6.875 - 6.903: 70.1582% ( 68) 00:17:18.914 6.903 - 6.931: 70.8841% ( 56) 00:17:18.914 6.931 - 6.959: 71.4286% ( 42) 00:17:18.914 6.959 - 6.987: 71.9082% ( 37) 00:17:18.914 6.987 - 7.015: 72.6990% ( 61) 00:17:18.914 7.015 - 7.043: 73.5416% ( 65) 00:17:18.914 7.043 - 7.071: 74.2416% ( 54) 00:17:18.914 7.071 - 7.099: 74.8380% ( 46) 00:17:18.914 7.099 - 7.127: 75.6417% ( 62) 00:17:18.914 7.127 - 7.155: 76.4843% ( 65) 00:17:18.914 7.155 - 7.210: 78.1307% ( 127) 00:17:18.914 7.210 - 7.266: 79.9456% ( 140) 00:17:18.914 7.266 - 7.322: 81.6697% ( 133) 00:17:18.914 7.322 - 7.378: 83.0049% ( 103) 00:17:18.914 7.378 - 7.434: 84.3272% ( 102) 00:17:18.914 7.434 - 7.490: 85.0143% ( 53) 00:17:18.914 7.490 - 7.546: 85.8698% ( 66) 
00:17:18.914 7.546 - 7.602: 86.3625% ( 38) 00:17:18.914 7.602 - 7.658: 86.9069% ( 42) 00:17:18.914 7.658 - 7.714: 87.2569% ( 27) 00:17:18.914 7.714 - 7.769: 87.5810% ( 25) 00:17:18.914 7.769 - 7.825: 87.8921% ( 24) 00:17:18.914 7.825 - 7.881: 88.1773% ( 22) 00:17:18.914 7.881 - 7.937: 88.5922% ( 32) 00:17:18.914 7.937 - 7.993: 88.9551% ( 28) 00:17:18.914 7.993 - 8.049: 89.3440% ( 30) 00:17:18.914 8.049 - 8.105: 90.2385% ( 69) 00:17:18.914 8.105 - 8.161: 91.5219% ( 99) 00:17:18.914 8.161 - 8.217: 92.9997% ( 114) 00:17:18.914 8.217 - 8.272: 93.9461% ( 73) 00:17:18.914 8.272 - 8.328: 94.4646% ( 40) 00:17:18.914 8.328 - 8.384: 94.9054% ( 34) 00:17:18.914 8.384 - 8.440: 95.0998% ( 15) 00:17:18.914 8.440 - 8.496: 95.2424% ( 11) 00:17:18.914 8.496 - 8.552: 95.3980% ( 12) 00:17:18.914 8.552 - 8.608: 95.4887% ( 7) 00:17:18.914 8.608 - 8.664: 95.5924% ( 8) 00:17:18.914 8.664 - 8.720: 95.6184% ( 2) 00:17:18.914 8.720 - 8.776: 95.6313% ( 1) 00:17:18.914 8.776 - 8.831: 95.6832% ( 4) 00:17:18.914 8.831 - 8.887: 95.7221% ( 3) 00:17:18.914 8.887 - 8.943: 95.7739% ( 4) 00:17:18.914 8.943 - 8.999: 95.8258% ( 4) 00:17:18.914 8.999 - 9.055: 95.9165% ( 7) 00:17:18.915 9.055 - 9.111: 96.2276% ( 24) 00:17:18.915 9.111 - 9.167: 96.5258% ( 23) 00:17:18.915 9.167 - 9.223: 96.7202% ( 15) 00:17:18.915 9.223 - 9.279: 96.9406% ( 17) 00:17:18.915 9.279 - 9.334: 97.0703% ( 10) 00:17:18.915 9.334 - 9.390: 97.1999% ( 10) 00:17:18.915 9.390 - 9.446: 97.2647% ( 5) 00:17:18.915 9.446 - 9.502: 97.3036% ( 3) 00:17:18.915 9.502 - 9.558: 97.3555% ( 4) 00:17:18.915 9.558 - 9.614: 97.3943% ( 3) 00:17:18.915 9.614 - 9.670: 97.4203% ( 2) 00:17:18.915 9.670 - 9.726: 97.4721% ( 4) 00:17:18.915 9.726 - 9.782: 97.4981% ( 2) 00:17:18.915 9.782 - 9.838: 97.5240% ( 2) 00:17:18.915 9.893 - 9.949: 97.5369% ( 1) 00:17:18.915 10.005 - 10.061: 97.5499% ( 1) 00:17:18.915 10.117 - 10.173: 97.5629% ( 1) 00:17:18.915 10.229 - 10.285: 97.5758% ( 1) 00:17:18.915 10.341 - 10.397: 97.6147% ( 3) 00:17:18.915 10.397 - 10.452: 97.6536% ( 3) 00:17:18.915 10.452 - 10.508: 97.6795% ( 2) 00:17:18.915 10.564 - 10.620: 97.7314% ( 4) 00:17:18.915 10.676 - 10.732: 97.7703% ( 3) 00:17:18.915 10.732 - 10.788: 97.8092% ( 3) 00:17:18.915 10.788 - 10.844: 97.8221% ( 1) 00:17:18.915 10.844 - 10.900: 97.8740% ( 4) 00:17:18.915 10.900 - 10.955: 97.9388% ( 5) 00:17:18.915 10.955 - 11.011: 97.9647% ( 2) 00:17:18.915 11.011 - 11.067: 98.0036% ( 3) 00:17:18.915 11.067 - 11.123: 98.0166% ( 1) 00:17:18.915 11.123 - 11.179: 98.0684% ( 4) 00:17:18.915 11.179 - 11.235: 98.0944% ( 2) 00:17:18.915 11.235 - 11.291: 98.1203% ( 2) 00:17:18.915 11.291 - 11.347: 98.1592% ( 3) 00:17:18.915 11.347 - 11.403: 98.1981% ( 3) 00:17:18.915 11.403 - 11.459: 98.2240% ( 2) 00:17:18.915 11.459 - 11.514: 98.2370% ( 1) 00:17:18.915 11.514 - 11.570: 98.2629% ( 2) 00:17:18.915 11.570 - 11.626: 98.2759% ( 1) 00:17:18.915 11.626 - 11.682: 98.2888% ( 1) 00:17:18.915 11.682 - 11.738: 98.3277% ( 3) 00:17:18.915 11.738 - 11.794: 98.3536% ( 2) 00:17:18.915 11.794 - 11.850: 98.3666% ( 1) 00:17:18.915 11.850 - 11.906: 98.3925% ( 2) 00:17:18.915 12.017 - 12.073: 98.4055% ( 1) 00:17:18.915 12.073 - 12.129: 98.4314% ( 2) 00:17:18.915 12.129 - 12.185: 98.4444% ( 1) 00:17:18.915 12.297 - 12.353: 98.4833% ( 3) 00:17:18.915 12.353 - 12.409: 98.4962% ( 1) 00:17:18.915 12.409 - 12.465: 98.5092% ( 1) 00:17:18.915 12.465 - 12.521: 98.5351% ( 2) 00:17:18.915 12.744 - 12.800: 98.5611% ( 2) 00:17:18.915 12.856 - 12.912: 98.5740% ( 1) 00:17:18.915 12.912 - 12.968: 98.5870% ( 1) 00:17:18.915 12.968 - 13.024: 98.6129% ( 2) 
00:17:18.915 13.024 - 13.079: 98.6259% ( 1) 00:17:18.915 13.191 - 13.247: 98.6388% ( 1) 00:17:18.915 13.806 - 13.862: 98.6518% ( 1) 00:17:18.915 14.086 - 14.141: 98.6648% ( 1) 00:17:18.915 14.309 - 14.421: 98.6777% ( 1) 00:17:18.915 14.421 - 14.533: 98.7166% ( 3) 00:17:18.915 15.092 - 15.203: 98.7296% ( 1) 00:17:18.915 15.203 - 15.315: 98.7425% ( 1) 00:17:18.915 15.315 - 15.427: 98.7555% ( 1) 00:17:18.915 16.545 - 16.657: 98.7685% ( 1) 00:17:18.915 16.657 - 16.769: 98.7814% ( 1) 00:17:18.915 16.769 - 16.880: 98.8333% ( 4) 00:17:18.915 16.880 - 16.992: 98.8722% ( 3) 00:17:18.915 16.992 - 17.104: 98.9240% ( 4) 00:17:18.915 17.104 - 17.216: 98.9759% ( 4) 00:17:18.915 17.216 - 17.328: 99.0148% ( 3) 00:17:18.915 17.328 - 17.439: 99.1185% ( 8) 00:17:18.915 17.439 - 17.551: 99.1833% ( 5) 00:17:18.915 17.551 - 17.663: 99.2352% ( 4) 00:17:18.915 17.663 - 17.775: 99.2740% ( 3) 00:17:18.915 17.775 - 17.886: 99.3129% ( 3) 00:17:18.915 17.886 - 17.998: 99.3518% ( 3) 00:17:18.915 17.998 - 18.110: 99.3778% ( 2) 00:17:18.915 18.110 - 18.222: 99.4166% ( 3) 00:17:18.915 18.222 - 18.334: 99.4426% ( 2) 00:17:18.915 18.334 - 18.445: 99.4555% ( 1) 00:17:18.915 18.445 - 18.557: 99.4815% ( 2) 00:17:18.915 18.557 - 18.669: 99.5074% ( 2) 00:17:18.915 18.669 - 18.781: 99.5333% ( 2) 00:17:18.915 18.781 - 18.893: 99.5592% ( 2) 00:17:18.915 18.893 - 19.004: 99.5722% ( 1) 00:17:18.915 19.004 - 19.116: 99.6241% ( 4) 00:17:18.915 19.228 - 19.340: 99.6370% ( 1) 00:17:18.915 19.452 - 19.563: 99.6630% ( 2) 00:17:18.915 19.563 - 19.675: 99.6759% ( 1) 00:17:18.915 19.675 - 19.787: 99.6889% ( 1) 00:17:18.915 19.899 - 20.010: 99.7018% ( 1) 00:17:18.915 20.010 - 20.122: 99.7148% ( 1) 00:17:18.915 20.793 - 20.905: 99.7278% ( 1) 00:17:18.915 22.023 - 22.134: 99.7407% ( 1) 00:17:18.915 22.134 - 22.246: 99.7537% ( 1) 00:17:18.915 22.470 - 22.582: 99.7796% ( 2) 00:17:18.915 22.582 - 22.693: 99.7926% ( 1) 00:17:18.915 23.029 - 23.141: 99.8315% ( 3) 00:17:18.915 23.141 - 23.252: 99.8574% ( 2) 00:17:18.915 23.252 - 23.364: 99.9093% ( 4) 00:17:18.915 23.364 - 23.476: 99.9222% ( 1) 00:17:18.915 23.700 - 23.811: 99.9352% ( 1) 00:17:18.915 23.923 - 24.035: 99.9481% ( 1) 00:17:18.915 24.035 - 24.147: 99.9611% ( 1) 00:17:18.915 24.370 - 24.482: 99.9741% ( 1) 00:17:18.915 24.817 - 24.929: 99.9870% ( 1) 00:17:18.915 25.935 - 26.047: 100.0000% ( 1) 00:17:18.915 00:17:18.915 00:17:18.915 real 0m1.235s 00:17:18.915 user 0m1.091s 00:17:18.915 sys 0m0.100s 00:17:18.915 09:31:19 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:18.915 09:31:19 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:17:18.915 ************************************ 00:17:18.915 END TEST nvme_overhead 00:17:18.915 ************************************ 00:17:18.915 09:31:19 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:17:18.915 09:31:19 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:17:18.915 09:31:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:18.915 09:31:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:18.915 ************************************ 00:17:18.915 START TEST nvme_arbitration 00:17:18.915 ************************************ 00:17:18.915 09:31:19 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:17:22.209 Initializing NVMe Controllers 00:17:22.209 Attached to 0000:00:10.0 00:17:22.209 Attached to 0000:00:11.0 00:17:22.209 
Attached to 0000:00:13.0 00:17:22.209 Attached to 0000:00:12.0 00:17:22.209 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:17:22.209 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:17:22.209 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:17:22.209 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:17:22.209 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:17:22.209 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:17:22.209 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:17:22.209 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:17:22.209 Initialization complete. Launching workers. 00:17:22.209 Starting thread on core 1 with urgent priority queue 00:17:22.209 Starting thread on core 2 with urgent priority queue 00:17:22.209 Starting thread on core 3 with urgent priority queue 00:17:22.209 Starting thread on core 0 with urgent priority queue 00:17:22.209 QEMU NVMe Ctrl (12340 ) core 0: 512.00 IO/s 195.31 secs/100000 ios 00:17:22.209 QEMU NVMe Ctrl (12342 ) core 0: 512.00 IO/s 195.31 secs/100000 ios 00:17:22.210 QEMU NVMe Ctrl (12341 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:17:22.210 QEMU NVMe Ctrl (12342 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:17:22.210 QEMU NVMe Ctrl (12343 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:17:22.210 QEMU NVMe Ctrl (12342 ) core 3: 554.67 IO/s 180.29 secs/100000 ios 00:17:22.210 ======================================================== 00:17:22.210 00:17:22.210 00:17:22.210 real 0m3.432s 00:17:22.210 user 0m9.557s 00:17:22.210 sys 0m0.141s 00:17:22.210 09:31:22 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.210 09:31:22 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:17:22.210 ************************************ 00:17:22.210 END TEST nvme_arbitration 00:17:22.210 ************************************ 00:17:22.210 09:31:22 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:17:22.210 09:31:22 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:22.210 09:31:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.210 09:31:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.210 ************************************ 00:17:22.210 START TEST nvme_single_aen 00:17:22.210 ************************************ 00:17:22.210 09:31:22 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:17:22.470 Asynchronous Event Request test 00:17:22.470 Attached to 0000:00:10.0 00:17:22.470 Attached to 0000:00:11.0 00:17:22.470 Attached to 0000:00:13.0 00:17:22.470 Attached to 0000:00:12.0 00:17:22.470 Reset controller to setup AER completions for this process 00:17:22.470 Registering asynchronous event callbacks... 
00:17:22.470 Getting orig temperature thresholds of all controllers 00:17:22.470 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:17:22.470 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:17:22.470 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:17:22.470 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:17:22.470 Setting all controllers temperature threshold low to trigger AER 00:17:22.470 Waiting for all controllers temperature threshold to be set lower 00:17:22.470 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:22.470 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:17:22.470 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:22.470 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:17:22.470 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:22.470 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:17:22.470 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:17:22.470 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:17:22.470 Waiting for all controllers to trigger AER and reset threshold 00:17:22.470 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:17:22.470 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:17:22.470 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:17:22.470 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:17:22.470 Cleaning up... 00:17:22.470 00:17:22.470 real 0m0.253s 00:17:22.470 user 0m0.095s 00:17:22.470 sys 0m0.114s 00:17:22.470 09:31:22 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:22.470 09:31:22 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:17:22.470 ************************************ 00:17:22.470 END TEST nvme_single_aen 00:17:22.470 ************************************ 00:17:22.470 09:31:22 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:17:22.470 09:31:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:22.470 09:31:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:22.470 09:31:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.470 ************************************ 00:17:22.470 START TEST nvme_doorbell_aers 00:17:22.470 ************************************ 00:17:22.470 09:31:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:17:22.470 09:31:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:17:22.470 09:31:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:17:22.470 09:31:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:17:22.470 09:31:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:17:22.470 09:31:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:17:22.470 09:31:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:17:22.470 09:31:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:17:22.470 09:31:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:22.471 09:31:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 
00:17:22.471 09:31:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:17:22.471 09:31:23 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:17:22.471 09:31:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:17:22.471 09:31:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:22.736 [2024-07-25 09:31:23.297445] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:17:32.721 Executing: test_write_invalid_db 00:17:32.721 Waiting for AER completion... 00:17:32.721 Failure: test_write_invalid_db 00:17:32.721 00:17:32.721 Executing: test_invalid_db_write_overflow_sq 00:17:32.721 Waiting for AER completion... 00:17:32.721 Failure: test_invalid_db_write_overflow_sq 00:17:32.721 00:17:32.721 Executing: test_invalid_db_write_overflow_cq 00:17:32.721 Waiting for AER completion... 00:17:32.721 Failure: test_invalid_db_write_overflow_cq 00:17:32.721 00:17:32.721 09:31:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:17:32.721 09:31:33 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:17:32.721 [2024-07-25 09:31:33.329838] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:17:42.711 Executing: test_write_invalid_db 00:17:42.711 Waiting for AER completion... 00:17:42.711 Failure: test_write_invalid_db 00:17:42.711 00:17:42.711 Executing: test_invalid_db_write_overflow_sq 00:17:42.711 Waiting for AER completion... 00:17:42.711 Failure: test_invalid_db_write_overflow_sq 00:17:42.711 00:17:42.711 Executing: test_invalid_db_write_overflow_cq 00:17:42.711 Waiting for AER completion... 00:17:42.711 Failure: test_invalid_db_write_overflow_cq 00:17:42.711 00:17:42.711 09:31:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:17:42.711 09:31:43 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:17:42.968 [2024-07-25 09:31:43.389121] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:17:52.966 Executing: test_write_invalid_db 00:17:52.966 Waiting for AER completion... 00:17:52.966 Failure: test_write_invalid_db 00:17:52.966 00:17:52.966 Executing: test_invalid_db_write_overflow_sq 00:17:52.966 Waiting for AER completion... 00:17:52.966 Failure: test_invalid_db_write_overflow_sq 00:17:52.966 00:17:52.966 Executing: test_invalid_db_write_overflow_cq 00:17:52.966 Waiting for AER completion... 
00:17:52.966 Failure: test_invalid_db_write_overflow_cq 00:17:52.966 00:17:52.966 09:31:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:17:52.966 09:31:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:17:52.966 [2024-07-25 09:31:53.445353] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.956 Executing: test_write_invalid_db 00:18:02.956 Waiting for AER completion... 00:18:02.956 Failure: test_write_invalid_db 00:18:02.956 00:18:02.956 Executing: test_invalid_db_write_overflow_sq 00:18:02.956 Waiting for AER completion... 00:18:02.956 Failure: test_invalid_db_write_overflow_sq 00:18:02.956 00:18:02.956 Executing: test_invalid_db_write_overflow_cq 00:18:02.956 Waiting for AER completion... 00:18:02.956 Failure: test_invalid_db_write_overflow_cq 00:18:02.956 00:18:02.956 00:18:02.956 real 0m40.263s 00:18:02.956 user 0m35.832s 00:18:02.956 sys 0m4.121s 00:18:02.956 09:32:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:02.956 09:32:03 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:18:02.956 ************************************ 00:18:02.956 END TEST nvme_doorbell_aers 00:18:02.956 ************************************ 00:18:02.956 09:32:03 nvme -- nvme/nvme.sh@97 -- # uname 00:18:02.956 09:32:03 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:18:02.956 09:32:03 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:18:02.956 09:32:03 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:18:02.956 09:32:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:02.956 09:32:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:02.956 ************************************ 00:18:02.956 START TEST nvme_multi_aen 00:18:02.956 ************************************ 00:18:02.956 09:32:03 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:18:02.956 [2024-07-25 09:32:03.493429] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.956 [2024-07-25 09:32:03.493565] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.956 [2024-07-25 09:32:03.493583] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.956 [2024-07-25 09:32:03.495197] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.956 [2024-07-25 09:32:03.495258] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.956 [2024-07-25 09:32:03.495274] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.956 [2024-07-25 09:32:03.496513] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. 
Dropping the request. 00:18:02.956 [2024-07-25 09:32:03.496547] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.956 [2024-07-25 09:32:03.496568] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.956 [2024-07-25 09:32:03.497591] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.957 [2024-07-25 09:32:03.497626] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.957 [2024-07-25 09:32:03.497635] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 68954) is not found. Dropping the request. 00:18:02.957 Child process pid: 69470 00:18:03.215 [Child] Asynchronous Event Request test 00:18:03.215 [Child] Attached to 0000:00:10.0 00:18:03.215 [Child] Attached to 0000:00:11.0 00:18:03.215 [Child] Attached to 0000:00:13.0 00:18:03.215 [Child] Attached to 0000:00:12.0 00:18:03.215 [Child] Registering asynchronous event callbacks... 00:18:03.215 [Child] Getting orig temperature thresholds of all controllers 00:18:03.215 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:18:03.215 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:18:03.215 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:18:03.215 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:18:03.216 [Child] Waiting for all controllers to trigger AER and reset threshold 00:18:03.216 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:18:03.216 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:18:03.216 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:18:03.216 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:18:03.216 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:18:03.216 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:18:03.216 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:18:03.216 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:18:03.216 [Child] Cleaning up... 00:18:03.216 Asynchronous Event Request test 00:18:03.216 Attached to 0000:00:10.0 00:18:03.216 Attached to 0000:00:11.0 00:18:03.216 Attached to 0000:00:13.0 00:18:03.216 Attached to 0000:00:12.0 00:18:03.216 Reset controller to setup AER completions for this process 00:18:03.216 Registering asynchronous event callbacks... 
00:18:03.216 Getting orig temperature thresholds of all controllers 00:18:03.216 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:18:03.216 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:18:03.216 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:18:03.216 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:18:03.216 Setting all controllers temperature threshold low to trigger AER 00:18:03.216 Waiting for all controllers temperature threshold to be set lower 00:18:03.216 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:18:03.216 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:18:03.216 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:18:03.216 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:18:03.216 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:18:03.216 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:18:03.216 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:18:03.216 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:18:03.216 Waiting for all controllers to trigger AER and reset threshold 00:18:03.216 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:18:03.216 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:18:03.216 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:18:03.216 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:18:03.216 Cleaning up... 00:18:03.216 00:18:03.216 real 0m0.486s 00:18:03.216 user 0m0.169s 00:18:03.216 sys 0m0.224s 00:18:03.216 09:32:03 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.216 09:32:03 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:18:03.216 ************************************ 00:18:03.216 END TEST nvme_multi_aen 00:18:03.216 ************************************ 00:18:03.216 09:32:03 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:18:03.216 09:32:03 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:03.216 09:32:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.216 09:32:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:03.216 ************************************ 00:18:03.216 START TEST nvme_startup 00:18:03.216 ************************************ 00:18:03.216 09:32:03 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:18:03.476 Initializing NVMe Controllers 00:18:03.476 Attached to 0000:00:10.0 00:18:03.476 Attached to 0000:00:11.0 00:18:03.476 Attached to 0000:00:13.0 00:18:03.476 Attached to 0000:00:12.0 00:18:03.476 Initialization complete. 00:18:03.476 Time used:128100.820 (us). 
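The startup test above reports roughly 128 ms to probe and initialize all four emulated controllers. A manual re-run would look roughly like the line below; the path and the -t 1000000 argument are copied from the run_test trace in this log, and reading -t as a startup time budget in microseconds is an assumption, not something the log states.
# Sketch only: manual run of the startup timing test from this log.
sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000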
00:18:03.476 00:18:03.476 real 0m0.208s 00:18:03.476 user 0m0.064s 00:18:03.476 sys 0m0.100s 00:18:03.476 09:32:04 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:03.476 09:32:04 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:18:03.476 ************************************ 00:18:03.476 END TEST nvme_startup 00:18:03.476 ************************************ 00:18:03.476 09:32:04 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:18:03.476 09:32:04 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:03.476 09:32:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:03.476 09:32:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:03.476 ************************************ 00:18:03.476 START TEST nvme_multi_secondary 00:18:03.476 ************************************ 00:18:03.476 09:32:04 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:18:03.476 09:32:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:18:03.476 09:32:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=69526 00:18:03.476 09:32:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=69527 00:18:03.476 09:32:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:18:03.476 09:32:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:18:07.664 Initializing NVMe Controllers 00:18:07.664 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:07.664 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:18:07.664 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:18:07.664 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:18:07.664 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:18:07.664 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:18:07.664 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:18:07.664 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:18:07.664 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:18:07.664 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:18:07.664 Initialization complete. Launching workers. 
00:18:07.664 ======================================================== 00:18:07.664 Latency(us) 00:18:07.664 Device Information : IOPS MiB/s Average min max 00:18:07.664 PCIE (0000:00:10.0) NSID 1 from core 2: 3171.32 12.39 5043.43 1587.67 13662.98 00:18:07.664 PCIE (0000:00:11.0) NSID 1 from core 2: 3171.32 12.39 5045.10 1441.43 13584.20 00:18:07.664 PCIE (0000:00:13.0) NSID 1 from core 2: 3171.32 12.39 5045.00 1343.30 13845.81 00:18:07.664 PCIE (0000:00:12.0) NSID 1 from core 2: 3171.32 12.39 5044.49 1592.27 13617.89 00:18:07.664 PCIE (0000:00:12.0) NSID 2 from core 2: 3171.32 12.39 5045.00 1577.65 13604.96 00:18:07.664 PCIE (0000:00:12.0) NSID 3 from core 2: 3171.32 12.39 5045.11 1428.49 13618.57 00:18:07.664 ======================================================== 00:18:07.664 Total : 19027.94 74.33 5044.69 1343.30 13845.81 00:18:07.664 00:18:07.664 09:32:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 69526 00:18:07.664 Initializing NVMe Controllers 00:18:07.664 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:07.664 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:18:07.664 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:18:07.664 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:18:07.664 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:18:07.664 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:18:07.664 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:18:07.664 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:18:07.664 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:18:07.664 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:18:07.664 Initialization complete. Launching workers. 00:18:07.664 ======================================================== 00:18:07.664 Latency(us) 00:18:07.664 Device Information : IOPS MiB/s Average min max 00:18:07.664 PCIE (0000:00:10.0) NSID 1 from core 1: 5803.96 22.67 2754.38 1577.84 7258.06 00:18:07.664 PCIE (0000:00:11.0) NSID 1 from core 1: 5803.96 22.67 2756.11 1541.26 7052.28 00:18:07.664 PCIE (0000:00:13.0) NSID 1 from core 1: 5803.96 22.67 2756.17 1476.26 8471.12 00:18:07.664 PCIE (0000:00:12.0) NSID 1 from core 1: 5803.96 22.67 2756.27 1619.85 8157.88 00:18:07.665 PCIE (0000:00:12.0) NSID 2 from core 1: 5803.96 22.67 2756.35 1608.48 7621.18 00:18:07.665 PCIE (0000:00:12.0) NSID 3 from core 1: 5803.96 22.67 2756.55 1622.23 7139.97 00:18:07.665 ======================================================== 00:18:07.665 Total : 34823.76 136.03 2755.97 1476.26 8471.12 00:18:07.665 00:18:09.041 Initializing NVMe Controllers 00:18:09.041 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:09.041 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:18:09.041 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:18:09.041 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:18:09.041 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:09.041 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:18:09.041 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:18:09.041 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:18:09.041 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:18:09.041 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:18:09.041 Initialization complete. Launching workers. 
00:18:09.041 ======================================================== 00:18:09.041 Latency(us) 00:18:09.041 Device Information : IOPS MiB/s Average min max 00:18:09.041 PCIE (0000:00:10.0) NSID 1 from core 0: 9499.87 37.11 1682.66 834.94 7979.64 00:18:09.041 PCIE (0000:00:11.0) NSID 1 from core 0: 9499.87 37.11 1683.76 845.47 7773.04 00:18:09.041 PCIE (0000:00:13.0) NSID 1 from core 0: 9499.87 37.11 1683.73 816.29 7537.26 00:18:09.041 PCIE (0000:00:12.0) NSID 1 from core 0: 9499.87 37.11 1683.71 798.70 7473.55 00:18:09.041 PCIE (0000:00:12.0) NSID 2 from core 0: 9499.87 37.11 1683.69 756.34 8153.98 00:18:09.041 PCIE (0000:00:12.0) NSID 3 from core 0: 9503.07 37.12 1683.11 746.16 8018.49 00:18:09.041 ======================================================== 00:18:09.041 Total : 57002.40 222.67 1683.44 746.16 8153.98 00:18:09.041 00:18:09.041 09:32:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 69527 00:18:09.041 09:32:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=69595 00:18:09.041 09:32:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:18:09.041 09:32:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:18:09.041 09:32:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=69596 00:18:09.041 09:32:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:18:12.327 Initializing NVMe Controllers 00:18:12.327 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:12.327 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:18:12.327 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:18:12.327 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:18:12.327 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:18:12.327 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:18:12.327 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:18:12.327 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:18:12.327 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:18:12.327 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:18:12.327 Initialization complete. Launching workers. 
00:18:12.327 ======================================================== 00:18:12.327 Latency(us) 00:18:12.327 Device Information : IOPS MiB/s Average min max 00:18:12.327 PCIE (0000:00:10.0) NSID 1 from core 1: 6232.09 24.34 2565.23 853.76 7432.88 00:18:12.327 PCIE (0000:00:11.0) NSID 1 from core 1: 6232.09 24.34 2566.84 862.68 6687.41 00:18:12.327 PCIE (0000:00:13.0) NSID 1 from core 1: 6232.09 24.34 2566.90 886.42 6675.43 00:18:12.327 PCIE (0000:00:12.0) NSID 1 from core 1: 6232.09 24.34 2567.04 896.43 6958.29 00:18:12.327 PCIE (0000:00:12.0) NSID 2 from core 1: 6232.09 24.34 2567.10 871.59 7118.58 00:18:12.327 PCIE (0000:00:12.0) NSID 3 from core 1: 6237.42 24.36 2565.14 881.23 6647.83 00:18:12.327 ======================================================== 00:18:12.327 Total : 37397.89 146.09 2566.38 853.76 7432.88 00:18:12.327 00:18:12.327 Initializing NVMe Controllers 00:18:12.327 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:12.327 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:18:12.327 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:18:12.327 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:18:12.327 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:12.327 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:18:12.327 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:18:12.327 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:18:12.327 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:18:12.327 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:18:12.327 Initialization complete. Launching workers. 00:18:12.327 ======================================================== 00:18:12.327 Latency(us) 00:18:12.327 Device Information : IOPS MiB/s Average min max 00:18:12.327 PCIE (0000:00:10.0) NSID 1 from core 0: 5881.60 22.98 2717.64 872.58 7275.16 00:18:12.327 PCIE (0000:00:11.0) NSID 1 from core 0: 5881.60 22.98 2718.92 891.26 7156.08 00:18:12.327 PCIE (0000:00:13.0) NSID 1 from core 0: 5881.60 22.98 2718.51 908.81 7363.09 00:18:12.327 PCIE (0000:00:12.0) NSID 1 from core 0: 5881.60 22.98 2718.26 899.34 7718.08 00:18:12.327 PCIE (0000:00:12.0) NSID 2 from core 0: 5881.60 22.98 2717.92 895.14 7560.86 00:18:12.327 PCIE (0000:00:12.0) NSID 3 from core 0: 5881.60 22.98 2717.60 884.61 7363.47 00:18:12.327 ======================================================== 00:18:12.327 Total : 35289.60 137.85 2718.14 872.58 7718.08 00:18:12.327 00:18:14.238 Initializing NVMe Controllers 00:18:14.238 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:14.238 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:18:14.238 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:18:14.238 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:18:14.238 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:18:14.238 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:18:14.238 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:18:14.238 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:18:14.238 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:18:14.238 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:18:14.238 Initialization complete. Launching workers. 
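[annotation] With a fixed queue depth, the average latencies in these tables follow from the per-namespace IOPS via Little's law: latency ≈ queue_depth / IOPS. Checking the core 1 table above against the -q 16 setting (a bc sketch):

  # 16 outstanding I/Os per namespace at 6232.09 IOPS, expressed in microseconds
  echo "scale=1; 16 * 1000000 / 6232.09" | bc   # -> ~2567 us, matching the ~2566 us averages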
00:18:14.238 ======================================================== 00:18:14.238 Latency(us) 00:18:14.238 Device Information : IOPS MiB/s Average min max 00:18:14.238 PCIE (0000:00:10.0) NSID 1 from core 2: 3435.55 13.42 4655.68 1142.67 10812.74 00:18:14.238 PCIE (0000:00:11.0) NSID 1 from core 2: 3435.55 13.42 4656.86 1186.07 11549.16 00:18:14.238 PCIE (0000:00:13.0) NSID 1 from core 2: 3435.35 13.42 4657.03 1268.90 12630.95 00:18:14.238 PCIE (0000:00:12.0) NSID 1 from core 2: 3435.55 13.42 4656.65 1229.50 12249.58 00:18:14.238 PCIE (0000:00:12.0) NSID 2 from core 2: 3435.55 13.42 4656.55 1082.47 12598.79 00:18:14.238 PCIE (0000:00:12.0) NSID 3 from core 2: 3435.55 13.42 4656.21 1005.32 12690.40 00:18:14.238 ======================================================== 00:18:14.238 Total : 20613.11 80.52 4656.50 1005.32 12690.40 00:18:14.238 00:18:14.498 09:32:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 69595 00:18:14.498 ************************************ 00:18:14.498 END TEST nvme_multi_secondary 00:18:14.498 ************************************ 00:18:14.498 09:32:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 69596 00:18:14.498 00:18:14.498 real 0m10.812s 00:18:14.498 user 0m18.456s 00:18:14.498 sys 0m0.879s 00:18:14.498 09:32:14 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:14.498 09:32:14 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:18:14.498 09:32:14 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:18:14.498 09:32:14 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:18:14.498 09:32:14 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/68546 ]] 00:18:14.498 09:32:14 nvme -- common/autotest_common.sh@1090 -- # kill 68546 00:18:14.498 09:32:14 nvme -- common/autotest_common.sh@1091 -- # wait 68546 00:18:14.498 [2024-07-25 09:32:14.943260] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.498 [2024-07-25 09:32:14.944449] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.498 [2024-07-25 09:32:14.944482] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.498 [2024-07-25 09:32:14.944495] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.498 [2024-07-25 09:32:14.946637] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.499 [2024-07-25 09:32:14.946698] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.499 [2024-07-25 09:32:14.946710] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.499 [2024-07-25 09:32:14.946739] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.499 [2024-07-25 09:32:14.949157] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 
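[annotation] The repeated "owning process (pid 69469) is not found. Dropping the request." lines here and just below are the PCIe transport discarding admin requests that were queued by a test process which has already exited; at this point everything is being torn down, so they read as expected cleanup noise rather than a failure. The kill_stub teardown being traced reduces to roughly this sketch (stub pid taken from the log):

  stub=68546
  [[ -e /proc/$stub ]] && kill "$stub"
  wait "$stub"
  rm -f /var/run/spdk_stub0   # the rm appears a few lines further down in the trace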
00:18:14.499 [2024-07-25 09:32:14.949190] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.499 [2024-07-25 09:32:14.949200] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.499 [2024-07-25 09:32:14.949210] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.499 [2024-07-25 09:32:14.951864] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.499 [2024-07-25 09:32:14.951907] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.499 [2024-07-25 09:32:14.951921] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.499 [2024-07-25 09:32:14.951935] nvme_pcie_common.c: 294:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 69469) is not found. Dropping the request. 00:18:14.760 09:32:15 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:18:14.760 09:32:15 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:18:14.760 09:32:15 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:18:14.760 09:32:15 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:14.760 09:32:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:14.760 09:32:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:14.760 ************************************ 00:18:14.760 START TEST bdev_nvme_reset_stuck_adm_cmd 00:18:14.760 ************************************ 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:18:14.760 * Looking for test storage... 
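[annotation] The bdev_nvme_reset_stuck_adm_cmd test that starts here checks that a controller reset can complete an admin command which an error injection is deliberately holding. Stripped of the tracing, the RPC sequence visible in the log is roughly the following sketch (spdk_tgt is assumed to be running already; the base64 command payload and the status decoding are elided):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  tmp_file=$(mktemp /tmp/err_inj_XXXXX.txt)
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  # hold the next admin Get Features (opc 10) for up to 15 s, then fail it with sct=0/sc=1
  $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # send a Get Features (Number of Queues) admin command; it gets stuck behind the injection
  $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "<base64 admin command>" > "$tmp_file" &
  sleep 2
  $RPC bdev_nvme_reset_controller nvme0   # the reset must complete the stuck command manually
  wait
  $RPC bdev_nvme_detach_controller nvme0

The later base64_decode_bits calls then pull sc and sct out of the completion saved in the temp file (sc=0x1, sct=0x0), compare them with the injected values, and check the elapsed time against the 5-second test_timeout.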
00:18:14.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:14.760 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69750 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69750 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 69750 ']' 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:15.020 09:32:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:18:15.020 [2024-07-25 09:32:15.525631] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:15.020 [2024-07-25 09:32:15.525858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69750 ] 00:18:15.278 [2024-07-25 09:32:15.700181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.536 [2024-07-25 09:32:15.928796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.536 [2024-07-25 09:32:15.928979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.536 [2024-07-25 09:32:15.929113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.536 [2024-07-25 09:32:15.929175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.474 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:16.474 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:18:16.474 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:18:16.474 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.474 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:18:16.474 nvme0n1 00:18:16.474 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_AQqJ5.txt 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:18:16.475 true 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721899936 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69773 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:16.475 09:32:16 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:18:18.376 [2024-07-25 09:32:18.932052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:18:18.376 [2024-07-25 09:32:18.932411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:18.376 [2024-07-25 09:32:18.932473] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:18.376 [2024-07-25 09:32:18.932522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:18.376 [2024-07-25 09:32:18.934287] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69773 00:18:18.376 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69773 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69773 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.376 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:18:18.635 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.635 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:18:18.635 09:32:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_AQqJ5.txt 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_AQqJ5.txt 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69750 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 69750 ']' 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 69750 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69750 00:18:18.635 killing process with pid 69750 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69750' 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 69750 00:18:18.635 09:32:19 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 69750 00:18:21.167 09:32:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:18:21.167 09:32:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:18:21.167 00:18:21.167 real 0m6.343s 00:18:21.167 user 0m21.771s 00:18:21.167 sys 0m0.678s 00:18:21.167 ************************************ 00:18:21.167 END TEST bdev_nvme_reset_stuck_adm_cmd 00:18:21.167 ************************************ 00:18:21.167 09:32:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:21.167 09:32:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:18:21.167 09:32:21 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:18:21.167 09:32:21 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:18:21.167 09:32:21 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:21.167 09:32:21 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:21.167 09:32:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:21.167 ************************************ 00:18:21.167 START TEST nvme_fio 00:18:21.167 ************************************ 00:18:21.167 09:32:21 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:18:21.167 09:32:21 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:21.167 09:32:21 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:18:21.167 09:32:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:18:21.167 09:32:21 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:18:21.167 09:32:21 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:18:21.167 09:32:21 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:21.167 09:32:21 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:21.167 09:32:21 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:18:21.167 09:32:21 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:18:21.167 09:32:21 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:18:21.167 09:32:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:18:21.167 09:32:21 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:18:21.167 09:32:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:18:21.167 09:32:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:21.167 09:32:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:18:21.427 09:32:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:21.427 09:32:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:18:21.686 09:32:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:18:21.686 09:32:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:18:21.686 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:18:21.686 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:18:21.686 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:21.686 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:21.686 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:21.686 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:18:21.686 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:21.686 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:21.686 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:18:21.686 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:21.687 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:21.687 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:21.687 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:21.687 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:18:21.687 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:21.687 09:32:22 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:18:21.945 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:21.945 fio-3.35 00:18:21.945 Starting 1 thread 00:18:27.300 00:18:27.300 test: (groupid=0, jobs=1): err= 0: pid=69932: Thu Jul 25 09:32:27 2024 00:18:27.300 read: IOPS=22.4k, BW=87.6MiB/s (91.9MB/s)(175MiB/2001msec) 00:18:27.300 slat (nsec): min=4582, max=62540, avg=5394.44, stdev=1254.53 00:18:27.300 clat (usec): min=220, max=11127, avg=2844.99, stdev=397.31 00:18:27.300 lat (usec): min=226, max=11176, avg=2850.39, stdev=397.99 00:18:27.300 clat percentiles (usec): 00:18:27.300 | 1.00th=[ 2606], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:18:27.300 | 30.00th=[ 2769], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:18:27.300 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 2966], 00:18:27.300 | 99.00th=[ 4359], 99.50th=[ 6063], 99.90th=[ 8455], 99.95th=[ 8455], 00:18:27.300 | 99.99th=[10683] 00:18:27.300 bw ( KiB/s): min=84680, max=91672, per=99.34%, avg=89112.00, stdev=3853.61, samples=3 00:18:27.300 iops : min=21170, max=22918, avg=22278.00, stdev=963.40, samples=3 00:18:27.300 write: IOPS=22.3k, BW=87.0MiB/s (91.3MB/s)(174MiB/2001msec); 0 zone resets 00:18:27.300 slat (nsec): min=4737, max=65221, avg=5575.34, stdev=1225.95 00:18:27.300 clat (usec): min=270, max=10882, avg=2849.45, stdev=393.99 00:18:27.300 lat (usec): min=275, max=10905, avg=2855.03, stdev=394.65 00:18:27.300 clat percentiles (usec): 00:18:27.300 | 1.00th=[ 2606], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:18:27.300 | 30.00th=[ 2769], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:18:27.300 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 2999], 00:18:27.300 | 99.00th=[ 4359], 99.50th=[ 6063], 99.90th=[ 8356], 99.95th=[ 8455], 00:18:27.300 | 99.99th=[10421] 00:18:27.300 bw ( KiB/s): min=84560, max=92112, per=100.00%, avg=89306.67, stdev=4133.37, 
samples=3 00:18:27.300 iops : min=21140, max=23028, avg=22326.67, stdev=1033.34, samples=3 00:18:27.300 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:18:27.300 lat (msec) : 2=0.05%, 4=98.78%, 10=1.11%, 20=0.02% 00:18:27.300 cpu : usr=99.20%, sys=0.10%, ctx=23, majf=0, minf=606 00:18:27.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:27.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.300 issued rwts: total=44873,44589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.300 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:27.300 00:18:27.300 Run status group 0 (all jobs): 00:18:27.300 READ: bw=87.6MiB/s (91.9MB/s), 87.6MiB/s-87.6MiB/s (91.9MB/s-91.9MB/s), io=175MiB (184MB), run=2001-2001msec 00:18:27.300 WRITE: bw=87.0MiB/s (91.3MB/s), 87.0MiB/s-87.0MiB/s (91.3MB/s-91.3MB/s), io=174MiB (183MB), run=2001-2001msec 00:18:27.560 ----------------------------------------------------- 00:18:27.560 Suppressions used: 00:18:27.560 count bytes template 00:18:27.560 1 32 /usr/src/fio/parse.c 00:18:27.560 1 8 libtcmalloc_minimal.so 00:18:27.560 ----------------------------------------------------- 00:18:27.560 00:18:27.560 09:32:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:18:27.560 09:32:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:18:27.560 09:32:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:18:27.560 09:32:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:18:27.819 09:32:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:18:27.819 09:32:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:18:28.079 09:32:28 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:18:28.079 09:32:28 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1345 
-- # asan_lib=/usr/lib64/libasan.so.8 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:28.079 09:32:28 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:18:28.337 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:28.337 fio-3.35 00:18:28.337 Starting 1 thread 00:18:34.904 00:18:34.904 test: (groupid=0, jobs=1): err= 0: pid=69997: Thu Jul 25 09:32:34 2024 00:18:34.904 read: IOPS=22.7k, BW=88.7MiB/s (93.1MB/s)(178MiB/2001msec) 00:18:34.904 slat (nsec): min=4571, max=96936, avg=5316.97, stdev=1590.61 00:18:34.904 clat (usec): min=214, max=11338, avg=2806.56, stdev=488.88 00:18:34.904 lat (usec): min=219, max=11410, avg=2811.88, stdev=489.84 00:18:34.904 clat percentiles (usec): 00:18:34.904 | 1.00th=[ 2573], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 00:18:34.904 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:18:34.904 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2868], 95.00th=[ 2933], 00:18:34.904 | 99.00th=[ 5473], 99.50th=[ 7111], 99.90th=[ 7963], 99.95th=[ 8225], 00:18:34.904 | 99.99th=[10945] 00:18:34.904 bw ( KiB/s): min=88816, max=90155, per=98.43%, avg=89451.67, stdev=672.06, samples=3 00:18:34.904 iops : min=22204, max=22538, avg=22362.67, stdev=167.62, samples=3 00:18:34.904 write: IOPS=22.6k, BW=88.2MiB/s (92.5MB/s)(177MiB/2001msec); 0 zone resets 00:18:34.904 slat (nsec): min=4492, max=71083, avg=5466.27, stdev=1435.87 00:18:34.905 clat (usec): min=248, max=11082, avg=2813.27, stdev=492.48 00:18:34.905 lat (usec): min=254, max=11105, avg=2818.73, stdev=493.38 00:18:34.905 clat percentiles (usec): 00:18:34.905 | 1.00th=[ 2573], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 00:18:34.905 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:18:34.905 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2868], 95.00th=[ 2933], 00:18:34.905 | 99.00th=[ 5473], 99.50th=[ 7111], 99.90th=[ 7963], 99.95th=[ 8455], 00:18:34.905 | 99.99th=[10552] 00:18:34.905 bw ( KiB/s): min=88224, max=90576, per=99.23%, avg=89630.33, stdev=1241.83, samples=3 00:18:34.905 iops : min=22056, max=22644, avg=22407.33, stdev=310.32, samples=3 00:18:34.905 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:18:34.905 lat (msec) : 2=0.05%, 4=98.12%, 10=1.76%, 20=0.02% 00:18:34.905 cpu : usr=99.25%, sys=0.10%, ctx=27, majf=0, minf=606 00:18:34.905 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:34.905 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.905 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:34.905 issued rwts: total=45462,45186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.905 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:34.905 00:18:34.905 Run status group 0 (all jobs): 00:18:34.905 READ: bw=88.7MiB/s (93.1MB/s), 88.7MiB/s-88.7MiB/s (93.1MB/s-93.1MB/s), io=178MiB (186MB), run=2001-2001msec 00:18:34.905 WRITE: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=177MiB (185MB), run=2001-2001msec 00:18:34.905 
----------------------------------------------------- 00:18:34.905 Suppressions used: 00:18:34.905 count bytes template 00:18:34.905 1 32 /usr/src/fio/parse.c 00:18:34.905 1 8 libtcmalloc_minimal.so 00:18:34.905 ----------------------------------------------------- 00:18:34.905 00:18:34.905 09:32:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:18:34.905 09:32:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:18:34.905 09:32:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:18:34.905 09:32:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:18:34.905 09:32:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:18:34.905 09:32:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:18:34.905 09:32:35 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:18:34.905 09:32:35 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:34.905 09:32:35 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:18:35.165 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:35.165 fio-3.35 00:18:35.165 Starting 1 thread 00:18:43.288 00:18:43.288 test: (groupid=0, jobs=1): err= 0: pid=70059: Thu Jul 25 09:32:42 2024 00:18:43.288 read: IOPS=21.8k, BW=85.2MiB/s (89.3MB/s)(170MiB/2001msec) 00:18:43.288 slat (nsec): min=4354, max=60733, avg=5502.03, stdev=1803.31 
00:18:43.288 clat (usec): min=209, max=10841, avg=2924.33, stdev=774.67 00:18:43.288 lat (usec): min=214, max=10846, avg=2929.84, stdev=775.97 00:18:43.288 clat percentiles (usec): 00:18:43.288 | 1.00th=[ 2212], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:18:43.288 | 30.00th=[ 2737], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:18:43.288 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2999], 95.00th=[ 3720], 00:18:43.288 | 99.00th=[ 7504], 99.50th=[ 8094], 99.90th=[10028], 99.95th=[10159], 00:18:43.288 | 99.99th=[10552] 00:18:43.288 bw ( KiB/s): min=74000, max=93072, per=97.19%, avg=84773.33, stdev=9773.86, samples=3 00:18:43.288 iops : min=18500, max=23268, avg=21193.33, stdev=2443.46, samples=3 00:18:43.288 write: IOPS=21.7k, BW=84.6MiB/s (88.7MB/s)(169MiB/2001msec); 0 zone resets 00:18:43.288 slat (nsec): min=4510, max=59002, avg=5704.99, stdev=1868.35 00:18:43.288 clat (usec): min=245, max=10997, avg=2934.13, stdev=787.14 00:18:43.288 lat (usec): min=250, max=11003, avg=2939.84, stdev=788.46 00:18:43.288 clat percentiles (usec): 00:18:43.288 | 1.00th=[ 2212], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2704], 00:18:43.288 | 30.00th=[ 2737], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:18:43.288 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2999], 95.00th=[ 3752], 00:18:43.288 | 99.00th=[ 7504], 99.50th=[ 8094], 99.90th=[10159], 99.95th=[10290], 00:18:43.288 | 99.99th=[10683] 00:18:43.288 bw ( KiB/s): min=74000, max=93256, per=97.99%, avg=84880.00, stdev=9869.19, samples=3 00:18:43.288 iops : min=18500, max=23314, avg=21220.00, stdev=2467.30, samples=3 00:18:43.288 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:18:43.288 lat (msec) : 2=0.64%, 4=95.17%, 10=4.04%, 20=0.12% 00:18:43.288 cpu : usr=99.25%, sys=0.05%, ctx=2, majf=0, minf=606 00:18:43.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:43.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:43.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:43.288 issued rwts: total=43634,43331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:43.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:43.288 00:18:43.288 Run status group 0 (all jobs): 00:18:43.288 READ: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:18:43.288 WRITE: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=169MiB (177MB), run=2001-2001msec 00:18:43.288 ----------------------------------------------------- 00:18:43.288 Suppressions used: 00:18:43.288 count bytes template 00:18:43.288 1 32 /usr/src/fio/parse.c 00:18:43.288 1 8 libtcmalloc_minimal.so 00:18:43.288 ----------------------------------------------------- 00:18:43.288 00:18:43.288 09:32:42 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:18:43.288 09:32:42 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:18:43.288 09:32:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:18:43.288 09:32:42 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:18:43.288 09:32:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:18:43.288 09:32:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:18:43.288 09:32:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:18:43.288 09:32:43 
nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:43.288 09:32:43 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:18:43.288 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:43.288 fio-3.35 00:18:43.288 Starting 1 thread 00:18:55.505 00:18:55.505 test: (groupid=0, jobs=1): err= 0: pid=70136: Thu Jul 25 09:32:54 2024 00:18:55.505 read: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(182MiB/2001msec) 00:18:55.505 slat (nsec): min=4612, max=45446, avg=5220.71, stdev=971.38 00:18:55.505 clat (usec): min=214, max=11501, avg=2745.10, stdev=266.60 00:18:55.505 lat (usec): min=219, max=11546, avg=2750.32, stdev=267.03 00:18:55.505 clat percentiles (usec): 00:18:55.505 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 00:18:55.505 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:18:55.505 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2835], 95.00th=[ 2900], 00:18:55.505 | 99.00th=[ 3064], 99.50th=[ 4047], 99.90th=[ 6128], 99.95th=[ 8225], 00:18:55.505 | 99.99th=[11207] 00:18:55.505 bw ( KiB/s): min=90920, max=93872, per=99.60%, avg=92514.67, stdev=1490.24, samples=3 00:18:55.505 iops : min=22730, max=23468, avg=23128.67, stdev=372.56, samples=3 00:18:55.505 write: IOPS=23.1k, BW=90.1MiB/s (94.5MB/s)(180MiB/2001msec); 0 zone resets 00:18:55.505 slat (nsec): min=4745, max=39399, avg=5388.57, stdev=960.25 00:18:55.505 clat (usec): min=223, max=11300, avg=2749.50, stdev=274.73 00:18:55.505 lat (usec): min=229, max=11323, avg=2754.89, 
stdev=275.11 00:18:55.505 clat percentiles (usec): 00:18:55.506 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 00:18:55.506 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:18:55.506 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2835], 95.00th=[ 2900], 00:18:55.506 | 99.00th=[ 3064], 99.50th=[ 4113], 99.90th=[ 6390], 99.95th=[ 8586], 00:18:55.506 | 99.99th=[10814] 00:18:55.506 bw ( KiB/s): min=90368, max=94280, per=100.00%, avg=92602.67, stdev=2014.67, samples=3 00:18:55.506 iops : min=22592, max=23570, avg=23150.67, stdev=503.67, samples=3 00:18:55.506 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:18:55.506 lat (msec) : 2=0.35%, 4=99.04%, 10=0.55%, 20=0.02% 00:18:55.506 cpu : usr=99.30%, sys=0.15%, ctx=4, majf=0, minf=604 00:18:55.506 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:55.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.506 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.506 issued rwts: total=46466,46180,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.506 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.506 00:18:55.506 Run status group 0 (all jobs): 00:18:55.506 READ: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=182MiB (190MB), run=2001-2001msec 00:18:55.506 WRITE: bw=90.1MiB/s (94.5MB/s), 90.1MiB/s-90.1MiB/s (94.5MB/s-94.5MB/s), io=180MiB (189MB), run=2001-2001msec 00:18:55.506 ----------------------------------------------------- 00:18:55.506 Suppressions used: 00:18:55.506 count bytes template 00:18:55.506 1 32 /usr/src/fio/parse.c 00:18:55.506 1 8 libtcmalloc_minimal.so 00:18:55.506 ----------------------------------------------------- 00:18:55.506 00:18:55.506 09:32:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:18:55.506 09:32:55 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:18:55.506 00:18:55.506 real 0m33.388s 00:18:55.506 user 0m21.046s 00:18:55.506 sys 0m21.391s 00:18:55.506 09:32:55 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:55.506 09:32:55 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:18:55.506 ************************************ 00:18:55.506 END TEST nvme_fio 00:18:55.506 ************************************ 00:18:55.506 00:18:55.506 real 1m46.644s 00:18:55.506 user 3m52.408s 00:18:55.506 sys 0m31.951s 00:18:55.506 09:32:55 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:55.506 09:32:55 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:55.506 ************************************ 00:18:55.506 END TEST nvme 00:18:55.506 ************************************ 00:18:55.506 09:32:55 -- spdk/autotest.sh@221 -- # [[ 0 -eq 1 ]] 00:18:55.506 09:32:55 -- spdk/autotest.sh@225 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:18:55.506 09:32:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:18:55.506 09:32:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:55.506 09:32:55 -- common/autotest_common.sh@10 -- # set +x 00:18:55.506 ************************************ 00:18:55.506 START TEST nvme_scc 00:18:55.506 ************************************ 00:18:55.506 09:32:55 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:18:55.506 * Looking for test storage... 
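[annotation] Each of the nvme_fio passes above runs fio against the PCIe controllers directly through SPDK's fio plugin rather than a kernel block device: the plugin (and, because this is an ASan build, libasan ahead of it) is injected via LD_PRELOAD, and the target controller is selected with a trtype/traddr filename string. One pass reduces to roughly this sketch, with paths taken from the trace:

  FIO=/usr/src/fio/fio
  PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  CONFIG=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
  # the sanitizer runtime must come before the plugin, which the helper above works out via ldd
  ASAN=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$ASAN $PLUGIN" "$FIO" "$CONFIG" \
      '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096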
00:18:55.506 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:18:55.506 09:32:55 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:55.506 09:32:55 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.506 09:32:55 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.506 09:32:55 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.506 09:32:55 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.506 09:32:55 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.506 09:32:55 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.506 09:32:55 nvme_scc -- paths/export.sh@5 -- # export PATH 00:18:55.506 09:32:55 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:18:55.506 09:32:55 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:18:55.506 09:32:55 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:55.506 09:32:55 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:18:55.506 09:32:55 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:18:55.506 09:32:55 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:18:55.506 09:32:55 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:55.506 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:55.506 Waiting for block devices as requested 00:18:55.506 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:55.765 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:55.765 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:18:55.765 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:01.050 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:01.050 09:33:01 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:19:01.050 09:33:01 nvme_scc -- scripts/common.sh@15 -- # local i 00:19:01.050 09:33:01 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:19:01.050 09:33:01 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:01.050 09:33:01 nvme_scc -- scripts/common.sh@24 -- # return 0 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:19:01.050 
09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:19:01.050 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:19:01.051 09:33:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.051 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:19:01.052 09:33:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:19:01.052 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
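The trace above is nvme/functions.sh building the nvme0 associative array: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0, reads each "register : value" line with IFS=:, and evals the pair into nvme0[reg]=val. A minimal sketch of that pattern, assuming nvme-cli is installed and /dev/nvme0 exists (both are assumptions here; this is not the actual functions.sh source):

#!/usr/bin/env bash
# Sketch of the id-ctrl parsing loop traced above: split each output line
# of `nvme id-ctrl` at the first colon and store it in an associative array.
declare -A ctrl=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue          # skip blank/banner lines with no "key : value" shape
    reg=${reg//[[:space:]]/}           # strip the padding around the register name
    val=${val# }                       # drop the single space after the colon
    ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)
printf 'vid=%s mdts=%s subnqn=%s\n' "${ctrl[vid]}" "${ctrl[mdts]}" "${ctrl[subnqn]}"

The same loop repeats below for the nvme0n1 namespace (id-ns) and again for each remaining controller (nvme1, ...).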
00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:19:01.053 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
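For nvme0n1 the loop has recorded nsze/ncap/nuse of 0x140000 blocks and flbas=0x4, and the lbaf4 entry captured just below carries "lbads:12 (in use)". A small worked sketch of how those fields combine into a byte size, using the values from this trace (an illustration only, not part of the test):

# flbas bits 3:0 select the in-use LBA format; lbads is the log2 of the block size.
nsze=0x140000      # namespace size in logical blocks, from the trace above
flbas=0x4          # format index 4 -> lbaf4
lbads=12           # from the lbaf4 entry below: 2^12 = 4096-byte blocks
fmt=$(( flbas & 0xf ))
block_size=$(( 1 << lbads ))
echo "lbaf${fmt}: ${block_size}-byte blocks, $(( nsze * block_size )) bytes total"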
00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:19:01.054 09:33:01 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:19:01.054 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:19:01.055 09:33:01 nvme_scc -- scripts/common.sh@15 -- # local i 00:19:01.055 09:33:01 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:19:01.055 09:33:01 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:01.055 09:33:01 nvme_scc -- scripts/common.sh@24 -- # return 0 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:19:01.055 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:19:01.056 09:33:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 
09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:19:01.056 09:33:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.056 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:19:01.057 09:33:01 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:19:01.057 09:33:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.057 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:19:01.058 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 
09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
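The trace above (functions.sh@16-23) repeats one pattern for every controller and namespace: run nvme-cli's id-ctrl or id-ns against the device, split each "reg : val" line on ':', and eval the pair into a global associative array named after the device (nvme1, nvme1n1, ...). Below is a minimal stand-alone sketch of that pattern; parse_id_output, the demo array name and the sample here-doc input are illustrative only and are not part of nvme/functions.sh.

#!/usr/bin/env bash
# Sketch of the key/value parsing loop visible in the trace: one global
# associative array per device, filled from "reg : val" lines.
parse_id_output() {
    local ref=$1 reg val
    # corresponds to the traced step local -gA 'nvme1=()': make an empty
    # global associative array named after the device
    declare -gA "$ref"
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # drop the padding around the key
        val=${val# }                  # drop the single space after ':'
        [[ -n $reg ]] || continue     # skip blank lines, as in the trace
        # corresponds to the traced step eval 'nvme1[ssvid]="0x1af4"'
        eval "${ref}[${reg}]=\"${val}\""
    done
}

# Feed it a few lines shaped like 'nvme id-ctrl' output instead of a real device.
parse_id_output demo <<'EOF'
vid       : 0x1b36
ssvid     : 0x1af4
sn        : 12340
mdts      : 7
EOF

declare -p demo   # prints something like: declare -A demo=([mdts]="7" [sn]="12340" [ssvid]="0x1af4" [vid]="0x1b36" )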
00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:19:01.059 09:33:01 nvme_scc -- scripts/common.sh@15 -- # local i 00:19:01.059 09:33:01 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:19:01.059 09:33:01 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:01.059 09:33:01 nvme_scc -- scripts/common.sh@24 -- # return 0 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.059 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:01.060 09:33:01 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.060 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
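(Editorial note.) The entries above and below all repeat one pattern: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl against /dev/nvme2, splits each "name : value" line on ':' (the IFS=: and read -r reg val entries), and evals the pair into the global associative array nvme2 (the @23 entries). A minimal sketch of that loop, reconstructed from this trace rather than copied from nvme/functions.sh, and using the hypothetical name nvme_get_sketch, might look like:

    nvme_get_sketch() {                      # e.g. nvme_get_sketch nvme2 /dev/nvme2
        local ref=$1 ctrl=$2 reg val
        local -gA "$ref=()"                  # mirrors the `local -gA 'nvme2=()'` entry in the trace
        while IFS=: read -r reg val; do      # split "name : value" on the first ':'
            read -r reg <<< "$reg"           # trim whitespace around the key
            read -r val <<< "$val"           # ...and around the value
            [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""   # nvme2[oacs]="0x12a", nvme2[acl]="3", ...
        done < <(nvme id-ctrl "$ctrl")       # the trace uses the full nvme-cli path instead of plain `nvme`
    }

After the loop the fields are read back as ${nvme2[oacs]}, ${nvme2[sqes]}, and so on; the real helper performs the same assignment via eval, as the @23 entries show, which also covers multi-word values such as the ps0 power-state line.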
00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.061 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:19:01.062 09:33:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[fuses]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 
09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2[ofcs]="0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.062 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:19:01.063 09:33:01 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:19:01.063 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme2n2[nsze]="0x100000"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:19:01.064 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 
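(Editorial note.) Around the functions.sh@53-@58 entries the script moves from the controller to its namespaces: it globs /sys/class/nvme/nvme2/nvme2n*, runs nvme id-ns against each /dev/nvme2nN, fills the per-namespace arrays nvme2n1, nvme2n2, nvme2n3 with the same read/eval pattern, and records each one in the nvme2_ns map. A hedged sketch of that enumeration, where scan_ctrl_namespaces is a hypothetical wrapper name and the loop body mirrors the trace, is:

    scan_ctrl_namespaces() {                      # e.g. scan_ctrl_namespaces /sys/class/nvme/nvme2
        local ctrl=$1 ns ns_dev
        local -gA "${ctrl##*/}_ns=()"             # nvme2_ns=()
        local -n _ctrl_ns=${ctrl##*/}_ns          # nameref, as in the @53 entry
        for ns in "$ctrl/${ctrl##*/}n"*; do       # nvme2n1 nvme2n2 nvme2n3 ...
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # same parsing loop as for id-ctrl
            _ctrl_ns[${ns##*n}]=$ns_dev           # _ctrl_ns[1]=nvme2n1, _ctrl_ns[2]=nvme2n2, ...
        done
    }

With flbas=0x4 selecting lbaf4 (ms:0 lbads:12, i.e. 4,096-byte blocks) and nsze=0x100000, each of these namespaces works out to 1,048,576 blocks of 4,096 bytes, 4 GiB, which is why the same field values repeat for nvme2n1 through nvme2n3.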
00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:19:01.065 09:33:01 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:19:01.065 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@20 
-- # local -gA 'nvme2n3=()' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[nabo]="0"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.066 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
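The trace above shows the nvme_get helper filling the nvme2n3 associative array one "field : value" pair at a time from the id-ns output. A minimal standalone sketch of the same parsing idea, for orientation only (it assumes nvme-cli is installed and /dev/nvme2n3 exists; variable names are illustrative, not the functions.sh implementation):

    #!/usr/bin/env bash
    # Parse the human-readable `nvme id-ns` output ("field : value") into a
    # bash associative array, the way the traced nvme_get helper does.
    declare -A ns
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # drop the padding around the field name
        [[ -n $reg && -n $val ]] || continue
        ns[$reg]=${val# }             # keep the raw value text
    done < <(nvme id-ns /dev/nvme2n3)
    echo "nsze=${ns[nsze]} flbas=${ns[flbas]}"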
00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:19:01.067 09:33:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:19:01.329 09:33:01 nvme_scc -- scripts/common.sh@15 -- # local i 00:19:01.329 09:33:01 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:19:01.329 09:33:01 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:01.329 09:33:01 nvme_scc -- scripts/common.sh@24 -- # return 0 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:19:01.329 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:19:01.330 09:33:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:19:01.330 09:33:01 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:19:01.330 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:19:01.330 09:33:01 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:19:01.331 09:33:01 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:19:01.331 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:19:01.332 
09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
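The nvme3 dump above recorded oncs=0x15d, and the next stretch of the trace picks an SCC-capable controller by testing bit 8 of that word (the same `(( oncs & 1 << 8 ))` check shown in ctrl_has_scc below). A short standalone illustration of that check, with the value taken from this log:

    #!/usr/bin/env bash
    # ONCS bit 8 (0x100) advertises the NVMe Copy command, which is what the
    # simple-copy (SCC) test requires; 0x15d & 0x100 is non-zero, so this
    # controller qualifies.
    oncs=0x15d
    if (( oncs & 1 << 8 )); then
        echo "controller supports the Copy (SCC) command"
    fi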
00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:19:01.332 09:33:01 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:19:01.332 09:33:01 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:19:01.332 09:33:01 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:19:01.333 09:33:01 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:19:01.333 09:33:01 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:19:01.333 09:33:01 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:19:01.333 09:33:01 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:01.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:02.471 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:02.471 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:02.730 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:02.730 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:02.730 09:33:03 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:19:02.730 09:33:03 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:02.730 09:33:03 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:02.730 09:33:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:19:02.730 ************************************ 00:19:02.730 START TEST nvme_simple_copy 00:19:02.730 ************************************ 00:19:02.730 09:33:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:19:02.989 Initializing NVMe Controllers 00:19:02.989 Attaching to 0000:00:10.0 00:19:02.989 Controller supports SCC. Attached to 0000:00:10.0 00:19:02.989 Namespace ID: 1 size: 6GB 00:19:02.989 Initialization complete. 00:19:02.989 00:19:02.989 Controller QEMU NVMe Ctrl (12340 ) 00:19:02.989 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:19:02.989 Namespace Block Size:4096 00:19:02.989 Writing LBAs 0 to 63 with Random Data 00:19:02.989 Copied LBAs from 0 - 63 to the Destination LBA 256 00:19:02.989 LBAs matching Written Data: 64 00:19:02.989 00:19:02.989 real 0m0.287s 00:19:02.989 user 0m0.106s 00:19:02.989 sys 0m0.081s 00:19:02.989 09:33:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:02.989 ************************************ 00:19:02.989 END TEST nvme_simple_copy 00:19:02.989 ************************************ 00:19:02.989 09:33:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:19:02.989 00:19:02.989 real 0m8.484s 00:19:02.989 user 0m1.363s 00:19:02.989 sys 0m2.180s 00:19:02.989 09:33:03 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:02.989 09:33:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:19:02.989 ************************************ 00:19:02.989 END TEST nvme_scc 00:19:02.989 ************************************ 00:19:03.249 09:33:03 -- spdk/autotest.sh@227 -- # [[ 0 -eq 1 ]] 00:19:03.249 09:33:03 -- spdk/autotest.sh@230 -- # [[ 0 -eq 1 ]] 00:19:03.249 09:33:03 -- spdk/autotest.sh@233 -- # [[ '' -eq 1 ]] 00:19:03.249 09:33:03 -- spdk/autotest.sh@236 -- # [[ 1 -eq 1 ]] 00:19:03.249 09:33:03 -- spdk/autotest.sh@237 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:19:03.249 09:33:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:03.249 09:33:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:03.249 09:33:03 -- common/autotest_common.sh@10 -- # set +x 00:19:03.249 ************************************ 00:19:03.249 START TEST nvme_fdp 00:19:03.249 ************************************ 00:19:03.249 09:33:03 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:19:03.249 * Looking for test storage... 
00:19:03.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:03.249 09:33:03 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:03.249 09:33:03 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.249 09:33:03 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.249 09:33:03 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.249 09:33:03 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.249 09:33:03 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.249 09:33:03 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.249 09:33:03 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:19:03.249 09:33:03 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:19:03.249 09:33:03 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:19:03.249 09:33:03 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:03.249 09:33:03 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:03.840 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:04.099 Waiting for block devices as requested 00:19:04.099 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:04.099 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:04.359 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:19:04.359 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:09.644 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:09.644 09:33:09 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:19:09.644 09:33:09 nvme_fdp -- scripts/common.sh@15 -- # local i 00:19:09.644 09:33:09 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:19:09.644 09:33:09 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:09.644 09:33:09 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 
09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:19:09.644 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:19:09.645 09:33:09 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:19:09.645 09:33:09 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.645 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:19:09.646 09:33:09 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:09.646 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:19:09.647 
09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:19:09.647 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:19:09.648 
09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:19:09.648 09:33:09 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:19:09.648 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:19:09.649 09:33:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
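For the namespace just dumped, the data block size falls out of two of the fields above: flbas selects the in-use LBA format (0x4 -> lbaf4, the entry marked "(in use)"), and that format's lbads is the log2 of the block size, so nvme0n1 is formatted with 4096-byte blocks, matching the block size the simple_copy test printed earlier for its controller. A small sketch of that arithmetic, reusing the values captured above (variable names are illustrative, not part of functions.sh):

  flbas=0x4
  fmt=$(( flbas & 0xf ))                      # low nibble selects the LBA format -> 4
  lbaf4='ms:0 lbads:12 rp:0 (in use)'         # value recorded for nvme0n1[lbaf4]
  lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$lbaf4")
  echo "block size: $(( 1 << lbads )) bytes"  # 2^12 = 4096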
00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:19:09.649 09:33:10 nvme_fdp -- scripts/common.sh@15 -- # local i 00:19:09.649 09:33:10 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:19:09.649 09:33:10 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:09.649 09:33:10 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:19:09.649 09:33:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:19:09.649 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 
09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
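Of the asynchronous-event bits in the nvme1[oaes]=0x100 value captured just above, only bit 8 is set, which in the OAES field corresponds to Namespace Attribute Notices; checking the bit from the parsed value is a one-liner:

    oaes=0x100                                   # value from the id-ctrl dump above
    (( oaes & (1 << 8) )) && echo 'namespace attribute notices supported'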
00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:19:09.650 09:33:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:19:09.650 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
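The wctemp/cctemp values recorded just above are composite-temperature thresholds reported in kelvin, so the QEMU defaults read more naturally after a quick conversion:

    wctemp=343 cctemp=373                        # kelvin, from the id-ctrl dump above
    printf 'warning: %d C, critical: %d C\n' $(( wctemp - 273 )) $(( cctemp - 273 ))
    # -> warning: 70 C, critical: 100 C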
00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:19:09.651 09:33:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
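The sqes=0x66 and cqes=0x44 bytes parsed just above each pack two sizes: the low nibble is the required queue-entry size and the high nibble the maximum, both as powers of two. Decoding the captured values:

    sqes=0x66 cqes=0x44                          # from the id-ctrl dump above
    printf 'SQ entry %d B (max %d), CQ entry %d B (max %d)\n' \
      $(( 2 ** (sqes & 0xf) )) $(( 2 ** (sqes >> 4) )) \
      $(( 2 ** (cqes & 0xf) )) $(( 2 ** (cqes >> 4) ))
    # -> SQ entry 64 B (max 64), CQ entry 16 B (max 16)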
00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:19:09.651 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.652 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
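With nsze=0x17a17a and flbas=0x7 now parsed for nvme1n1, the namespace capacity follows once the selected LBA format is known: the low four FLBAS bits pick lbaf7, which the trace below lists with lbads:12, i.e. 4096-byte blocks. A back-of-the-envelope check from those values:

    nsze=0x17a17a flbas=0x7 lbads=12             # values from this nvme1n1 dump
    fmt=$(( flbas & 0xf ))                       # -> LBA format 7
    block=$(( 1 << lbads ))                      # -> 4096 bytes
    echo "format $fmt: $(( nsze )) blocks x $block B = $(( nsze * block )) bytes"
    # -> format 7: 1548666 blocks x 4096 B = 6343335936 bytes (roughly 6.3 GB)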
00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 
09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.653 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
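The remaining lbaf entries for nvme1n1 continue below, after which the trace moves on to the third controller (nvme2 at 0000:00:12.0). The functions.sh@47-63 entries that bracket each of these per-device dumps outline the loop driving every nvme_get call: walk /sys/class/nvme/nvme*, skip devices the PCI filter rejects, then record each controller, its namespace array and its PCI address. A condensed sketch of that bookkeeping; the BDF lookup via readlink is illustrative, and the pci_can_use / nvme_get calls of the real script are only noted in comments:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      ctrl_dev=${ctrl##*/}                                # nvme0, nvme1, nvme2, ...
      pci=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:00:12.0
      # real script: pci_can_use "$pci" || continue, then nvme_get "$ctrl_dev" id-ctrl ...
      # plus one nvme_get per /sys/class/nvme/<ctrl>/<ctrl>n* namespace (id-ns)
      ctrls[$ctrl_dev]=$ctrl_dev
      nvmes[$ctrl_dev]=${ctrl_dev}_ns                     # name of the per-controller ns array
      bdfs[$ctrl_dev]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev          # numeric index keeps a stable order
    done
    for c in "${ordered_ctrls[@]}"; do echo "$c at ${bdfs[$c]}"; done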
00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:19:09.654 09:33:10 nvme_fdp -- scripts/common.sh@15 -- # local i 00:19:09.654 09:33:10 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:19:09.654 09:33:10 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:09.654 09:33:10 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:19:09.654 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:19:09.655 09:33:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.655 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:19:09.656 09:33:10 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:19:09.656 09:33:10 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.656 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 
09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:19:09.657 09:33:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.657 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:19:09.658 09:33:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:19:09.659 09:33:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:19:09.659 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.660 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:19:09.661 09:33:10 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:19:09.661 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:19:09.662 09:33:10 nvme_fdp -- scripts/common.sh@15 -- # local i 00:19:09.662 09:33:10 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:19:09.662 09:33:10 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:09.662 09:33:10 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:19:09.662 09:33:10 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.662 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:19:09.663 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:19:09.664 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 
09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:19:09.665 09:33:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:19:09.665 09:33:10 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:19:09.925 09:33:10 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:19:09.925 09:33:10 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:19:09.925 09:33:10 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:19:09.925 09:33:10 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:19:09.925 09:33:10 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:10.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:11.061 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:11.061 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:11.061 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:11.319 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:11.319 09:33:11 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:19:11.319 09:33:11 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:11.319 09:33:11 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:11.319 09:33:11 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:19:11.319 ************************************ 00:19:11.319 START TEST nvme_flexible_data_placement 00:19:11.319 ************************************ 00:19:11.319 09:33:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:19:11.579 Initializing NVMe Controllers 00:19:11.579 Attaching to 0000:00:13.0 00:19:11.579 Controller supports FDP Attached to 0000:00:13.0 00:19:11.579 Namespace ID: 1 Endurance Group ID: 1 
00:19:11.579 Initialization complete. 00:19:11.579 00:19:11.579 ================================== 00:19:11.579 == FDP tests for Namespace: #01 == 00:19:11.579 ================================== 00:19:11.579 00:19:11.579 Get Feature: FDP: 00:19:11.579 ================= 00:19:11.579 Enabled: Yes 00:19:11.579 FDP configuration Index: 0 00:19:11.579 00:19:11.579 FDP configurations log page 00:19:11.579 =========================== 00:19:11.579 Number of FDP configurations: 1 00:19:11.579 Version: 0 00:19:11.579 Size: 112 00:19:11.579 FDP Configuration Descriptor: 0 00:19:11.579 Descriptor Size: 96 00:19:11.579 Reclaim Group Identifier format: 2 00:19:11.579 FDP Volatile Write Cache: Not Present 00:19:11.579 FDP Configuration: Valid 00:19:11.579 Vendor Specific Size: 0 00:19:11.579 Number of Reclaim Groups: 2 00:19:11.579 Number of Reclaim Unit Handles: 8 00:19:11.579 Max Placement Identifiers: 128 00:19:11.579 Number of Namespaces Supported: 256 00:19:11.579 Reclaim unit Nominal Size: 6000000 bytes 00:19:11.579 Estimated Reclaim Unit Time Limit: Not Reported 00:19:11.579 RUH Desc #000: RUH Type: Initially Isolated 00:19:11.579 RUH Desc #001: RUH Type: Initially Isolated 00:19:11.579 RUH Desc #002: RUH Type: Initially Isolated 00:19:11.579 RUH Desc #003: RUH Type: Initially Isolated 00:19:11.579 RUH Desc #004: RUH Type: Initially Isolated 00:19:11.579 RUH Desc #005: RUH Type: Initially Isolated 00:19:11.579 RUH Desc #006: RUH Type: Initially Isolated 00:19:11.579 RUH Desc #007: RUH Type: Initially Isolated 00:19:11.579 00:19:11.579 FDP reclaim unit handle usage log page 00:19:11.579 ====================================== 00:19:11.579 Number of Reclaim Unit Handles: 8 00:19:11.579 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:19:11.579 RUH Usage Desc #001: RUH Attributes: Unused 00:19:11.579 RUH Usage Desc #002: RUH Attributes: Unused 00:19:11.579 RUH Usage Desc #003: RUH Attributes: Unused 00:19:11.579 RUH Usage Desc #004: RUH Attributes: Unused 00:19:11.579 RUH Usage Desc #005: RUH Attributes: Unused 00:19:11.579 RUH Usage Desc #006: RUH Attributes: Unused 00:19:11.579 RUH Usage Desc #007: RUH Attributes: Unused 00:19:11.579 00:19:11.579 FDP statistics log page 00:19:11.579 ======================= 00:19:11.579 Host bytes with metadata written: 862646272 00:19:11.579 Media bytes with metadata written: 862806016 00:19:11.579 Media bytes erased: 0 00:19:11.579 00:19:11.579 FDP Reclaim unit handle status 00:19:11.579 ============================== 00:19:11.579 Number of RUHS descriptors: 2 00:19:11.579 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002951 00:19:11.579 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:19:11.579 00:19:11.579 FDP write on placement id: 0 success 00:19:11.579 00:19:11.579 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:19:11.579 00:19:11.579 IO mgmt send: RUH update for Placement ID: #0 Success 00:19:11.579 00:19:11.579 Get Feature: FDP Events for Placement handle: #0 00:19:11.579 ======================== 00:19:11.579 Number of FDP Events: 6 00:19:11.579 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:19:11.579 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:19:11.579 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:19:11.579 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:19:11.579 FDP Event: #4 Type: Media Reallocated Enabled: No 00:19:11.579 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 
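The controller exercised by this output was picked by testing bit 19 (Flexible Data Placement) of the Identify Controller CTRATT field, which is set in the 0x88010 value echoed for nvme3 earlier in the trace. A minimal stand-alone sketch of the same check, assuming nvme-cli is installed and that /dev/nvme3 is the character device for the 0000:00:13.0 controller (both assumptions, not taken from this log):
  # Read CTRATT from Identify Controller and test bit 19 (FDP support).
  ctratt=$(nvme id-ctrl /dev/nvme3 | awk '/^ctratt/ {print $3}')   # e.g. 0x88010
  if (( ctratt & (1 << 19) )); then
      echo "FDP supported (ctratt=${ctratt})"
  else
      echo "FDP not supported (ctratt=${ctratt})"
  fi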
00:19:11.579 00:19:11.579 FDP events log page 00:19:11.579 =================== 00:19:11.579 Number of FDP events: 1 00:19:11.579 FDP Event #0: 00:19:11.579 Event Type: RU Not Written to Capacity 00:19:11.579 Placement Identifier: Valid 00:19:11.579 NSID: Valid 00:19:11.579 Location: Valid 00:19:11.579 Placement Identifier: 0 00:19:11.579 Event Timestamp: 8 00:19:11.579 Namespace Identifier: 1 00:19:11.579 Reclaim Group Identifier: 0 00:19:11.579 Reclaim Unit Handle Identifier: 0 00:19:11.579 00:19:11.579 FDP test passed 00:19:11.579 00:19:11.579 real 0m0.266s 00:19:11.579 user 0m0.084s 00:19:11.579 sys 0m0.082s 00:19:11.579 09:33:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:11.579 09:33:12 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:19:11.579 ************************************ 00:19:11.579 END TEST nvme_flexible_data_placement 00:19:11.579 ************************************ 00:19:11.579 00:19:11.579 real 0m8.434s 00:19:11.579 user 0m1.300s 00:19:11.579 sys 0m2.198s 00:19:11.579 09:33:12 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:11.579 09:33:12 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:19:11.579 ************************************ 00:19:11.579 END TEST nvme_fdp 00:19:11.579 ************************************ 00:19:11.579 09:33:12 -- spdk/autotest.sh@240 -- # [[ '' -eq 1 ]] 00:19:11.579 09:33:12 -- spdk/autotest.sh@244 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:11.579 09:33:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:11.579 09:33:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:11.579 09:33:12 -- common/autotest_common.sh@10 -- # set +x 00:19:11.579 ************************************ 00:19:11.579 START TEST nvme_rpc 00:19:11.579 ************************************ 00:19:11.579 09:33:12 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:11.839 * Looking for test storage... 
00:19:11.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:11.839 09:33:12 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:11.839 09:33:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 4 == 0 )) 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:19:11.839 09:33:12 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:19:11.839 09:33:12 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=71537 00:19:11.839 09:33:12 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:11.839 09:33:12 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:19:11.839 09:33:12 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 71537 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 71537 ']' 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.839 09:33:12 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:12.098 [2024-07-25 09:33:12.509169] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
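Stripped of the xtrace noise, the bdf selection traced at the top of this test reduces to the following pattern (a condensed sketch built only from commands visible in the trace; rootdir matches the path used throughout this run):
  rootdir=/home/vagrant/spdk_repo/spdk
  # Enumerate NVMe controllers from the generated SPDK config and keep their PCI addresses.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  bdf=${bdfs[0]}   # first controller; 0000:00:10.0 in this run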
00:19:12.098 [2024-07-25 09:33:12.509297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71537 ] 00:19:12.098 [2024-07-25 09:33:12.656044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:12.357 [2024-07-25 09:33:12.876315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.357 [2024-07-25 09:33:12.876351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.296 09:33:13 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:13.296 09:33:13 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:19:13.296 09:33:13 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:19:13.564 Nvme0n1 00:19:13.564 09:33:14 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:19:13.564 09:33:14 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:19:13.839 request: 00:19:13.839 { 00:19:13.839 "bdev_name": "Nvme0n1", 00:19:13.839 "filename": "non_existing_file", 00:19:13.839 "method": "bdev_nvme_apply_firmware", 00:19:13.839 "req_id": 1 00:19:13.839 } 00:19:13.839 Got JSON-RPC error response 00:19:13.839 response: 00:19:13.839 { 00:19:13.839 "code": -32603, 00:19:13.839 "message": "open file failed." 00:19:13.839 } 00:19:13.839 09:33:14 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:19:13.839 09:33:14 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:19:13.839 09:33:14 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:13.839 09:33:14 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:13.839 09:33:14 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 71537 00:19:13.839 09:33:14 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 71537 ']' 00:19:13.839 09:33:14 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 71537 00:19:13.839 09:33:14 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:19:13.839 09:33:14 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:13.839 09:33:14 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71537 00:19:13.839 09:33:14 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:13.839 09:33:14 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:13.839 killing process with pid 71537 00:19:13.839 09:33:14 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71537' 00:19:13.839 09:33:14 nvme_rpc -- common/autotest_common.sh@969 -- # kill 71537 00:19:13.839 09:33:14 nvme_rpc -- common/autotest_common.sh@974 -- # wait 71537 00:19:16.382 00:19:16.382 real 0m4.555s 00:19:16.383 user 0m8.172s 00:19:16.383 sys 0m0.663s 00:19:16.383 09:33:16 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:16.383 09:33:16 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:16.383 ************************************ 00:19:16.383 END TEST nvme_rpc 00:19:16.383 ************************************ 00:19:16.383 09:33:16 -- spdk/autotest.sh@245 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:16.383 09:33:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:19:16.383 09:33:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:16.383 09:33:16 -- common/autotest_common.sh@10 -- # set +x 00:19:16.383 ************************************ 00:19:16.383 START TEST nvme_rpc_timeouts 00:19:16.383 ************************************ 00:19:16.383 09:33:16 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:16.383 * Looking for test storage... 00:19:16.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:16.383 09:33:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:16.383 09:33:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_71613 00:19:16.383 09:33:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_71613 00:19:16.383 09:33:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=71640 00:19:16.383 09:33:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:16.383 09:33:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:19:16.383 09:33:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 71640 00:19:16.383 09:33:16 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 71640 ']' 00:19:16.383 09:33:16 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.383 09:33:16 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:16.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.383 09:33:16 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.383 09:33:16 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:16.383 09:33:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:16.383 [2024-07-25 09:33:16.970212] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:19:16.383 [2024-07-25 09:33:16.970335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71640 ] 00:19:16.643 [2024-07-25 09:33:17.132509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:16.902 [2024-07-25 09:33:17.349326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.902 [2024-07-25 09:33:17.349363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.841 09:33:18 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.841 09:33:18 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:19:17.841 Checking default timeout settings: 00:19:17.841 09:33:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:19:17.841 09:33:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:18.100 Making settings changes with rpc: 00:19:18.100 09:33:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:19:18.100 09:33:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:19:18.359 Check default vs. modified settings: 00:19:18.359 09:33:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:19:18.359 09:33:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_71613 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_71613 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:19:18.620 Setting action_on_timeout is changed as expected. 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_71613 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_71613 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:19:18.620 Setting timeout_us is changed as expected. 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_71613 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_71613 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:19:18.620 Setting timeout_admin_us is changed as expected. 
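Condensed, the save/modify/compare cycle this test performs looks like the following (a sketch assembled from the commands in the trace; the redirections into the two /tmp snapshot files are assumed, since xtrace does not print them):
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  default=/tmp/settings_default_71613
  modified=/tmp/settings_modified_71613
  # Snapshot the defaults, change the bdev_nvme timeouts over RPC, snapshot again.
  "$rpc" save_config > "$default"
  "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  "$rpc" save_config > "$modified"
  # Each knob must differ between the two snapshots once stripped to alphanumerics.
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" "$default" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
  done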
00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_71613 /tmp/settings_modified_71613 00:19:18.620 09:33:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 71640 00:19:18.620 09:33:19 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 71640 ']' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 71640 00:19:18.620 09:33:19 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:19:18.620 09:33:19 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71640 00:19:18.620 09:33:19 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:18.620 09:33:19 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:18.620 killing process with pid 71640 00:19:18.620 09:33:19 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71640' 00:19:18.620 09:33:19 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 71640 00:19:18.620 09:33:19 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 71640 00:19:21.159 RPC TIMEOUT SETTING TEST PASSED. 00:19:21.159 09:33:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:19:21.159 00:19:21.159 real 0m4.788s 00:19:21.159 user 0m8.850s 00:19:21.159 sys 0m0.651s 00:19:21.159 09:33:21 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:21.159 09:33:21 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:21.159 ************************************ 00:19:21.159 END TEST nvme_rpc_timeouts 00:19:21.159 ************************************ 00:19:21.159 09:33:21 -- spdk/autotest.sh@247 -- # uname -s 00:19:21.159 09:33:21 -- spdk/autotest.sh@247 -- # '[' Linux = Linux ']' 00:19:21.159 09:33:21 -- spdk/autotest.sh@248 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:19:21.159 09:33:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:21.159 09:33:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:21.159 09:33:21 -- common/autotest_common.sh@10 -- # set +x 00:19:21.159 ************************************ 00:19:21.159 START TEST sw_hotplug 00:19:21.159 ************************************ 00:19:21.159 09:33:21 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:19:21.159 * Looking for test storage... 
00:19:21.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:21.159 09:33:21 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:21.729 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:21.989 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:21.989 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:21.989 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:21.989 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:21.989 09:33:22 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:19:21.989 09:33:22 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:19:21.989 09:33:22 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:19:21.989 09:33:22 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@230 -- # local class 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@15 -- # local i 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:19:21.989 09:33:22 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@15 -- # local i 00:19:21.989 09:33:22 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@15 -- # local i 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:19:21.990 09:33:22 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:19:21.990 09:33:22 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:19:21.990 09:33:22 sw_hotplug -- 
nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:19:21.990 09:33:22 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:22.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:22.819 Waiting for block devices as requested 00:19:22.819 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:22.819 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:23.078 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:19:23.078 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:19:28.365 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:19:28.365 09:33:28 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:19:28.365 09:33:28 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:28.623 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:19:28.883 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:28.883 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:19:29.142 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:19:29.401 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:29.401 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:29.401 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:19:29.401 09:33:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=72520 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:19:29.661 09:33:30 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:19:29.661 09:33:30 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:19:29.661 09:33:30 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:19:29.661 09:33:30 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:19:29.661 09:33:30 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:19:29.661 09:33:30 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:19:29.920 Initializing NVMe Controllers 00:19:29.920 Attaching to 0000:00:10.0 00:19:29.920 Attaching to 0000:00:11.0 00:19:29.920 Attached to 0000:00:10.0 00:19:29.920 Attached to 0000:00:11.0 00:19:29.920 Initialization complete. Starting I/O... 
00:19:29.920 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:19:29.920 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:19:29.920 00:19:30.857 QEMU NVMe Ctrl (12340 ): 1836 I/Os completed (+1836) 00:19:30.857 QEMU NVMe Ctrl (12341 ): 1853 I/Os completed (+1853) 00:19:30.857 00:19:31.794 QEMU NVMe Ctrl (12340 ): 4420 I/Os completed (+2584) 00:19:31.794 QEMU NVMe Ctrl (12341 ): 4450 I/Os completed (+2597) 00:19:31.794 00:19:33.174 QEMU NVMe Ctrl (12340 ): 7056 I/Os completed (+2636) 00:19:33.174 QEMU NVMe Ctrl (12341 ): 7106 I/Os completed (+2656) 00:19:33.174 00:19:34.114 QEMU NVMe Ctrl (12340 ): 9780 I/Os completed (+2724) 00:19:34.114 QEMU NVMe Ctrl (12341 ): 9832 I/Os completed (+2726) 00:19:34.114 00:19:35.051 QEMU NVMe Ctrl (12340 ): 12290 I/Os completed (+2510) 00:19:35.051 QEMU NVMe Ctrl (12341 ): 12402 I/Os completed (+2570) 00:19:35.051 00:19:35.620 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:35.620 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:35.620 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:35.620 [2024-07-25 09:33:36.151613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:19:35.620 Controller removed: QEMU NVMe Ctrl (12340 ) 00:19:35.620 [2024-07-25 09:33:36.153762] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.153844] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.153871] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.153896] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:19:35.620 [2024-07-25 09:33:36.157051] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.157119] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.157140] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.157160] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:35.620 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:35.620 [2024-07-25 09:33:36.176935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:19:35.620 Controller removed: QEMU NVMe Ctrl (12341 ) 00:19:35.620 [2024-07-25 09:33:36.178265] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.178314] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.178346] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.178365] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:19:35.620 [2024-07-25 09:33:36.180784] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.180825] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.180846] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 [2024-07-25 09:33:36.180864] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:35.620 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:19:35.620 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:35.620 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:19:35.620 EAL: Scan for (pci) bus failed. 00:19:35.879 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:35.879 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:35.879 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:35.879 00:19:35.879 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:35.879 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:35.879 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:35.879 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:35.879 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:19:35.879 Attaching to 0000:00:10.0 00:19:35.879 Attached to 0000:00:10.0 00:19:36.139 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:36.139 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:36.139 09:33:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:36.139 Attaching to 0000:00:11.0 00:19:36.139 Attached to 0000:00:11.0 00:19:37.078 QEMU NVMe Ctrl (12340 ): 2478 I/Os completed (+2478) 00:19:37.078 QEMU NVMe Ctrl (12341 ): 2268 I/Os completed (+2268) 00:19:37.078 00:19:38.017 QEMU NVMe Ctrl (12340 ): 4986 I/Os completed (+2508) 00:19:38.017 QEMU NVMe Ctrl (12341 ): 4784 I/Os completed (+2516) 00:19:38.017 00:19:38.956 QEMU NVMe Ctrl (12340 ): 7526 I/Os completed (+2540) 00:19:38.956 QEMU NVMe Ctrl (12341 ): 7331 I/Os completed (+2547) 00:19:38.956 00:19:39.893 QEMU NVMe Ctrl (12340 ): 10022 I/Os completed (+2496) 00:19:39.893 QEMU NVMe Ctrl (12341 ): 9845 I/Os completed (+2514) 00:19:39.893 00:19:40.832 QEMU NVMe Ctrl (12340 ): 12670 I/Os completed (+2648) 00:19:40.832 QEMU NVMe Ctrl (12341 ): 12496 I/Os completed (+2651) 00:19:40.832 00:19:41.767 QEMU NVMe Ctrl (12340 ): 15210 I/Os completed (+2540) 00:19:41.767 QEMU NVMe Ctrl (12341 ): 15055 I/Os completed (+2559) 00:19:41.767 00:19:43.139 QEMU NVMe Ctrl (12340 ): 18441 I/Os completed (+3231) 00:19:43.139 QEMU NVMe Ctrl (12341 ): 18483 I/Os completed (+3428) 
00:19:43.139 00:19:44.082 QEMU NVMe Ctrl (12340 ): 21134 I/Os completed (+2693) 00:19:44.082 QEMU NVMe Ctrl (12341 ): 21419 I/Os completed (+2936) 00:19:44.082 00:19:45.039 QEMU NVMe Ctrl (12340 ): 23742 I/Os completed (+2608) 00:19:45.039 QEMU NVMe Ctrl (12341 ): 24033 I/Os completed (+2614) 00:19:45.039 00:19:45.977 QEMU NVMe Ctrl (12340 ): 26142 I/Os completed (+2400) 00:19:45.977 QEMU NVMe Ctrl (12341 ): 26440 I/Os completed (+2407) 00:19:45.977 00:19:46.915 QEMU NVMe Ctrl (12340 ): 28514 I/Os completed (+2372) 00:19:46.915 QEMU NVMe Ctrl (12341 ): 28825 I/Os completed (+2385) 00:19:46.915 00:19:47.854 QEMU NVMe Ctrl (12340 ): 31039 I/Os completed (+2525) 00:19:47.854 QEMU NVMe Ctrl (12341 ): 31413 I/Os completed (+2588) 00:19:47.854 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:48.114 [2024-07-25 09:33:48.518963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:19:48.114 Controller removed: QEMU NVMe Ctrl (12340 ) 00:19:48.114 [2024-07-25 09:33:48.521132] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.521204] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.521243] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.521277] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:19:48.114 [2024-07-25 09:33:48.524592] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.524659] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.524683] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.524706] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:19:48.114 [2024-07-25 09:33:48.548592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:19:48.114 Controller removed: QEMU NVMe Ctrl (12341 ) 00:19:48.114 [2024-07-25 09:33:48.550701] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.550768] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.550803] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.550831] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:19:48.114 [2024-07-25 09:33:48.553841] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.553894] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.553917] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 [2024-07-25 09:33:48.553939] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:48.114 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:19:48.374 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:19:48.374 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:48.374 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:19:48.374 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:19:48.374 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:19:48.374 Attaching to 0000:00:10.0 00:19:48.374 Attached to 0000:00:10.0 00:19:48.374 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:19:48.374 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:19:48.374 09:33:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:19:48.374 Attaching to 0000:00:11.0 00:19:48.374 Attached to 0000:00:11.0 00:19:48.943 QEMU NVMe Ctrl (12340 ): 1417 I/Os completed (+1417) 00:19:48.943 QEMU NVMe Ctrl (12341 ): 1196 I/Os completed (+1196) 00:19:48.943 00:19:49.881 QEMU NVMe Ctrl (12340 ): 4065 I/Os completed (+2648) 00:19:49.881 QEMU NVMe Ctrl (12341 ): 3844 I/Os completed (+2648) 00:19:49.881 00:19:50.820 QEMU NVMe Ctrl (12340 ): 6729 I/Os completed (+2664) 00:19:50.820 QEMU NVMe Ctrl (12341 ): 6508 I/Os completed (+2664) 00:19:50.820 00:19:51.758 QEMU NVMe Ctrl (12340 ): 9389 I/Os completed (+2660) 00:19:51.758 QEMU NVMe Ctrl (12341 ): 9176 I/Os completed (+2668) 00:19:51.758 00:19:53.137 QEMU NVMe Ctrl (12340 ): 12057 I/Os completed (+2668) 00:19:53.137 QEMU NVMe Ctrl (12341 ): 11844 I/Os completed (+2668) 00:19:53.137 00:19:53.704 QEMU NVMe Ctrl (12340 ): 14733 I/Os completed (+2676) 00:19:53.705 QEMU NVMe Ctrl (12341 ): 14520 I/Os completed (+2676) 00:19:53.705 00:19:55.080 QEMU NVMe Ctrl (12340 ): 17453 I/Os completed (+2720) 00:19:55.080 QEMU NVMe Ctrl (12341 ): 17244 I/Os completed (+2724) 00:19:55.080 00:19:56.016 QEMU NVMe Ctrl (12340 ): 20089 I/Os completed (+2636) 00:19:56.016 QEMU NVMe Ctrl (12341 ): 19888 I/Os completed (+2644) 00:19:56.016 00:19:56.951 
QEMU NVMe Ctrl (12340 ): 22789 I/Os completed (+2700) 00:19:56.951 QEMU NVMe Ctrl (12341 ): 22588 I/Os completed (+2700) 00:19:56.951 00:19:57.887 QEMU NVMe Ctrl (12340 ): 25429 I/Os completed (+2640) 00:19:57.887 QEMU NVMe Ctrl (12341 ): 25233 I/Os completed (+2645) 00:19:57.887 00:19:58.825 QEMU NVMe Ctrl (12340 ): 27973 I/Os completed (+2544) 00:19:58.825 QEMU NVMe Ctrl (12341 ): 27790 I/Os completed (+2557) 00:19:58.825 00:19:59.761 QEMU NVMe Ctrl (12340 ): 30701 I/Os completed (+2728) 00:19:59.761 QEMU NVMe Ctrl (12341 ): 30522 I/Os completed (+2732) 00:19:59.761 00:20:00.329 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:20:00.329 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:20:00.329 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:00.329 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:00.329 [2024-07-25 09:34:00.866149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:20:00.329 Controller removed: QEMU NVMe Ctrl (12340 ) 00:20:00.329 [2024-07-25 09:34:00.867638] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.867696] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.867717] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.867738] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:20:00.329 [2024-07-25 09:34:00.870260] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.870308] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.870326] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.870343] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:00.329 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:00.329 [2024-07-25 09:34:00.901994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:20:00.329 Controller removed: QEMU NVMe Ctrl (12341 ) 00:20:00.329 [2024-07-25 09:34:00.903410] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.903460] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.903483] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.903502] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:20:00.329 [2024-07-25 09:34:00.908319] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.908363] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.908384] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 [2024-07-25 09:34:00.908398] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:00.329 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:20:00.329 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:20:00.329 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:20:00.329 EAL: Scan for (pci) bus failed. 00:20:00.587 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:00.587 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:00.587 09:34:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:20:00.587 09:34:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:20:00.587 09:34:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:00.587 09:34:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:00.587 09:34:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:00.587 09:34:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:20:00.587 Attaching to 0000:00:10.0 00:20:00.587 Attached to 0000:00:10.0 00:20:00.587 09:34:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:20:00.587 09:34:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:00.587 09:34:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:20:00.845 Attaching to 0000:00:11.0 00:20:00.845 Attached to 0000:00:11.0 00:20:00.845 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:20:00.845 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:20:00.845 [2024-07-25 09:34:01.212626] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:20:13.055 09:34:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:20:13.055 09:34:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:20:13.055 09:34:13 sw_hotplug -- common/autotest_common.sh@717 -- # time=43.06 00:20:13.055 09:34:13 sw_hotplug -- common/autotest_common.sh@718 -- # echo 43.06 00:20:13.055 09:34:13 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:20:13.055 09:34:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.06 00:20:13.055 09:34:13 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.06 2 00:20:13.055 remove_attach_helper took 43.06s to complete (handling 2 nvme drive(s)) 09:34:13 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:20:19.624 09:34:19 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 72520 00:20:19.624 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (72520) - No such process 00:20:19.624 09:34:19 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 72520 00:20:19.624 09:34:19 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:20:19.624 09:34:19 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:20:19.624 09:34:19 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:20:19.624 09:34:19 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=73060 00:20:19.624 09:34:19 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:19.624 09:34:19 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:20:19.624 09:34:19 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 73060 00:20:19.624 09:34:19 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 73060 ']' 00:20:19.624 09:34:19 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.624 09:34:19 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:19.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.624 09:34:19 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.624 09:34:19 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:19.624 09:34:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:19.624 [2024-07-25 09:34:19.311586] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:19.624 [2024-07-25 09:34:19.311717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73060 ] 00:20:19.624 [2024-07-25 09:34:19.473567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.624 [2024-07-25 09:34:19.705381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.192 09:34:20 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:20.192 09:34:20 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:20:20.192 09:34:20 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:20:20.192 09:34:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.192 09:34:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:20.192 09:34:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.192 09:34:20 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:20:20.192 09:34:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:20:20.192 09:34:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:20:20.192 09:34:20 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:20:20.192 09:34:20 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:20:20.192 09:34:20 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:20:20.192 09:34:20 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:20:20.192 09:34:20 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:20:20.192 09:34:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:20:20.192 09:34:20 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:20:20.192 09:34:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:20:20.192 09:34:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:20:20.192 09:34:20 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:26.762 09:34:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.762 09:34:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:26.762 [2024-07-25 09:34:26.721416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:20:26.762 [2024-07-25 09:34:26.723532] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:26.762 [2024-07-25 09:34:26.723579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.762 [2024-07-25 09:34:26.723609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.762 [2024-07-25 09:34:26.723633] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:26.762 [2024-07-25 09:34:26.723648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.762 [2024-07-25 09:34:26.723658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.762 [2024-07-25 09:34:26.723671] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:26.762 [2024-07-25 09:34:26.723680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.762 [2024-07-25 09:34:26.723691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.762 [2024-07-25 09:34:26.723702] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:26.762 [2024-07-25 09:34:26.723717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.762 [2024-07-25 09:34:26.723726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.762 09:34:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:20:26.762 09:34:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:20:26.762 [2024-07-25 09:34:27.220454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:20:26.762 [2024-07-25 09:34:27.222230] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:26.762 [2024-07-25 09:34:27.222294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.762 [2024-07-25 09:34:27.222323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.762 [2024-07-25 09:34:27.222345] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:26.762 [2024-07-25 09:34:27.222354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.762 [2024-07-25 09:34:27.222364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.762 [2024-07-25 09:34:27.222373] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:26.762 [2024-07-25 09:34:27.222382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.762 [2024-07-25 09:34:27.222390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.762 [2024-07-25 09:34:27.222400] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:26.762 [2024-07-25 09:34:27.222409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:26.762 [2024-07-25 09:34:27.222418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.762 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:20:26.762 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:20:26.762 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:20:26.762 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:26.762 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:26.762 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:26.762 09:34:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.762 09:34:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:26.762 09:34:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.762 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:20:26.762 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:27.022 09:34:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:39.232 09:34:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.232 09:34:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:39.232 09:34:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:20:39.232 [2024-07-25 09:34:39.696582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:39.232 [2024-07-25 09:34:39.698645] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:39.232 [2024-07-25 09:34:39.698682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.232 [2024-07-25 09:34:39.698841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.232 [2024-07-25 09:34:39.698865] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:39.232 [2024-07-25 09:34:39.698878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.232 [2024-07-25 09:34:39.698888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.232 [2024-07-25 09:34:39.698901] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:39.232 [2024-07-25 09:34:39.698910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.232 [2024-07-25 09:34:39.698922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.232 [2024-07-25 09:34:39.698932] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:39.232 [2024-07-25 09:34:39.698944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.232 [2024-07-25 09:34:39.698953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:39.232 09:34:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.232 09:34:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:39.232 09:34:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:20:39.232 09:34:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:20:39.491 [2024-07-25 09:34:40.095817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:20:39.491 [2024-07-25 09:34:40.097683] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:39.491 [2024-07-25 09:34:40.097721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.491 [2024-07-25 09:34:40.097735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.491 [2024-07-25 09:34:40.097774] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:39.491 [2024-07-25 09:34:40.097783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.491 [2024-07-25 09:34:40.097793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.491 [2024-07-25 09:34:40.097825] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:39.491 [2024-07-25 09:34:40.097836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.491 [2024-07-25 09:34:40.097844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.491 [2024-07-25 09:34:40.097857] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:39.491 [2024-07-25 09:34:40.097865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:39.491 [2024-07-25 09:34:40.097875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:39.749 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:20:39.749 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:20:39.749 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:20:39.749 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:39.749 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:39.749 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:39.749 09:34:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.749 09:34:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:39.749 09:34:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.749 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:20:39.749 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:40.007 09:34:40 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:52.213 09:34:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.213 09:34:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:52.213 09:34:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:20:52.213 [2024-07-25 09:34:52.671760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:20:52.213 [2024-07-25 09:34:52.673821] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:52.213 [2024-07-25 09:34:52.673854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.213 [2024-07-25 09:34:52.673871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.213 [2024-07-25 09:34:52.673895] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:52.213 [2024-07-25 09:34:52.673908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.213 [2024-07-25 09:34:52.673916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.213 [2024-07-25 09:34:52.673928] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:52.213 [2024-07-25 09:34:52.673939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.213 [2024-07-25 09:34:52.673949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.213 [2024-07-25 09:34:52.673958] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:52.213 [2024-07-25 09:34:52.673986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.213 [2024-07-25 09:34:52.673995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:52.213 09:34:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.213 09:34:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:20:52.213 09:34:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:20:52.213 09:34:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:20:52.473 [2024-07-25 09:34:53.071009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:20:52.473 [2024-07-25 09:34:53.073312] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:52.473 [2024-07-25 09:34:53.073350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.473 [2024-07-25 09:34:53.073364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.473 [2024-07-25 09:34:53.073388] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:52.473 [2024-07-25 09:34:53.073397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.473 [2024-07-25 09:34:53.073410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.473 [2024-07-25 09:34:53.073418] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:52.473 [2024-07-25 09:34:53.073431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.473 [2024-07-25 09:34:53.073439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.473 [2024-07-25 09:34:53.073458] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:20:52.473 [2024-07-25 09:34:53.073467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.473 [2024-07-25 09:34:53.073479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.732 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:20:52.732 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:20:52.732 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:20:52.732 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:52.732 09:34:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.732 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:20:52.732 09:34:53 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:20:52.732 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:20:52.732 09:34:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.732 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:20:52.732 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:20:52.993 09:34:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.97 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.97 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.97 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.97 2 00:21:05.208 remove_attach_helper took 44.97s to complete (handling 2 nvme drive(s)) 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@122 -- 
# debug_remove_attach_helper 3 6 true 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:21:05.208 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:21:05.208 09:35:05 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:21:05.209 09:35:05 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:21:05.209 09:35:05 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:21:05.209 09:35:05 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:21:05.209 09:35:05 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:21:05.209 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:21:05.209 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:21:05.209 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:21:05.209 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:21:05.209 09:35:05 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:21:11.781 09:35:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.781 09:35:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:11.781 [2024-07-25 09:35:11.725756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:21:11.781 [2024-07-25 09:35:11.727118] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:11.781 [2024-07-25 09:35:11.727150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.781 [2024-07-25 09:35:11.727165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.781 [2024-07-25 09:35:11.727186] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:11.781 [2024-07-25 09:35:11.727197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.781 [2024-07-25 09:35:11.727206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.781 [2024-07-25 09:35:11.727219] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:11.781 [2024-07-25 09:35:11.727237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.781 [2024-07-25 09:35:11.727250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.781 [2024-07-25 09:35:11.727259] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:11.781 [2024-07-25 09:35:11.727270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.781 [2024-07-25 09:35:11.727278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.781 09:35:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:21:11.781 09:35:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:21:11.781 [2024-07-25 09:35:12.124999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:21:11.781 [2024-07-25 09:35:12.126315] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:11.781 [2024-07-25 09:35:12.126353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.781 [2024-07-25 09:35:12.126366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.781 [2024-07-25 09:35:12.126389] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:11.781 [2024-07-25 09:35:12.126398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.781 [2024-07-25 09:35:12.126408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.781 [2024-07-25 09:35:12.126418] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:11.781 [2024-07-25 09:35:12.126428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.781 [2024-07-25 09:35:12.126436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.781 [2024-07-25 09:35:12.126446] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:11.781 [2024-07-25 09:35:12.126454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:11.781 [2024-07-25 09:35:12.126464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:11.781 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:21:11.781 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:21:11.781 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:21:11.781 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:21:11.781 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:21:11.781 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:11.781 09:35:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.781 09:35:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:11.781 09:35:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.781 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:21:11.781 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:12.039 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:12.040 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:12.040 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:12.040 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:12.040 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:12.040 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:12.040 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:12.040 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:21:12.040 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:12.040 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:12.040 09:35:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:24.294 09:35:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.294 09:35:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:24.294 09:35:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:21:24.294 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:21:24.294 [2024-07-25 09:35:24.700988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:21:24.294 [2024-07-25 09:35:24.702599] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:24.294 [2024-07-25 09:35:24.702639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.294 [2024-07-25 09:35:24.702659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.294 [2024-07-25 09:35:24.702681] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:24.294 [2024-07-25 09:35:24.702694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.294 [2024-07-25 09:35:24.702704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.294 [2024-07-25 09:35:24.702725] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:24.294 [2024-07-25 09:35:24.702736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.294 [2024-07-25 09:35:24.702765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.294 [2024-07-25 09:35:24.702775] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:24.295 [2024-07-25 09:35:24.702787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.295 [2024-07-25 09:35:24.702797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.295 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:21:24.295 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:21:24.295 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:24.295 09:35:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.295 09:35:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:24.295 09:35:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.295 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:21:24.295 09:35:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:21:24.555 [2024-07-25 09:35:25.100215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:21:24.555 [2024-07-25 09:35:25.101509] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:24.555 [2024-07-25 09:35:25.101549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.555 [2024-07-25 09:35:25.101566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.555 [2024-07-25 09:35:25.101592] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:24.555 [2024-07-25 09:35:25.101602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.555 [2024-07-25 09:35:25.101616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.555 [2024-07-25 09:35:25.101625] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:24.555 [2024-07-25 09:35:25.101635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.555 [2024-07-25 09:35:25.101643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.555 [2024-07-25 09:35:25.101654] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:24.555 [2024-07-25 09:35:25.101663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.555 [2024-07-25 09:35:25.101673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:24.814 09:35:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.814 09:35:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:24.814 09:35:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:24.814 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:25.081 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:25.081 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:25.081 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:25.081 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:25.081 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:21:25.081 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:25.081 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:25.081 09:35:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:37.344 09:35:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.344 09:35:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:37.344 09:35:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:21:37.344 [2024-07-25 09:35:37.676231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:21:37.344 [2024-07-25 09:35:37.677968] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:37.344 [2024-07-25 09:35:37.678012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.344 [2024-07-25 09:35:37.678030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.344 [2024-07-25 09:35:37.678053] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:37.344 [2024-07-25 09:35:37.678065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.344 [2024-07-25 09:35:37.678075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.344 [2024-07-25 09:35:37.678090] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:37.344 [2024-07-25 09:35:37.678099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.344 [2024-07-25 09:35:37.678115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.344 [2024-07-25 09:35:37.678125] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:37.344 [2024-07-25 09:35:37.678136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.344 [2024-07-25 09:35:37.678146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:37.344 09:35:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.344 09:35:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:37.344 09:35:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:21:37.344 09:35:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:21:37.604 [2024-07-25 09:35:38.075461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:21:37.604 [2024-07-25 09:35:38.077019] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:37.604 [2024-07-25 09:35:38.077061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.604 [2024-07-25 09:35:38.077075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.604 [2024-07-25 09:35:38.077096] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:37.604 [2024-07-25 09:35:38.077105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.604 [2024-07-25 09:35:38.077115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.604 [2024-07-25 09:35:38.077125] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:37.604 [2024-07-25 09:35:38.077134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.604 [2024-07-25 09:35:38.077142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.604 [2024-07-25 09:35:38.077154] nvme_pcie_common.c: 746:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:21:37.604 [2024-07-25 09:35:38.077162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.604 [2024-07-25 09:35:38.077174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:21:37.864 09:35:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.864 09:35:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:37.864 09:35:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:21:37.864 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:21:38.123 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:38.123 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:21:38.123 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:21:38.123 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:21:38.123 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:21:38.123 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:21:38.123 09:35:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.98 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.98 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.98 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.98 2 00:21:50.340 remove_attach_helper took 44.98s to complete (handling 2 nvme drive(s)) 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:21:50.340 09:35:50 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 73060 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 73060 ']' 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 73060 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73060 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:50.340 09:35:50 sw_hotplug -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:50.340 killing process with pid 73060 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73060' 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@969 -- # kill 73060 00:21:50.340 09:35:50 sw_hotplug -- common/autotest_common.sh@974 -- # wait 73060 00:21:52.886 09:35:53 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:53.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:53.714 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:53.714 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:53.714 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.714 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:21:53.714 00:21:53.714 real 2m32.698s 00:21:53.714 user 1m53.395s 00:21:53.714 sys 0m19.219s 00:21:53.714 09:35:54 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:53.714 09:35:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:21:53.714 ************************************ 00:21:53.714 END TEST sw_hotplug 00:21:53.714 ************************************ 00:21:53.974 09:35:54 -- spdk/autotest.sh@251 -- # [[ 1 -eq 1 ]] 00:21:53.974 09:35:54 -- spdk/autotest.sh@252 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:21:53.974 09:35:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:53.974 09:35:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:53.974 09:35:54 -- common/autotest_common.sh@10 -- # set +x 00:21:53.974 ************************************ 00:21:53.974 START TEST nvme_xnvme 00:21:53.974 ************************************ 00:21:53.974 09:35:54 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:21:53.974 * Looking for test storage... 
00:21:53.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:21:53.974 09:35:54 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:53.974 09:35:54 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.974 09:35:54 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.974 09:35:54 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.974 09:35:54 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.974 09:35:54 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.974 09:35:54 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.974 09:35:54 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:21:53.974 09:35:54 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.974 09:35:54 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:21:53.974 09:35:54 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:53.974 09:35:54 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:53.974 09:35:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:21:53.974 ************************************ 00:21:53.974 START TEST xnvme_to_malloc_dd_copy 00:21:53.974 ************************************ 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:21:53.974 09:35:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:21:54.234 { 00:21:54.234 "subsystems": [ 00:21:54.234 { 00:21:54.234 "subsystem": "bdev", 00:21:54.234 "config": [ 00:21:54.234 { 00:21:54.234 "params": { 00:21:54.234 "block_size": 512, 00:21:54.234 "num_blocks": 2097152, 00:21:54.234 "name": "malloc0" 00:21:54.234 }, 00:21:54.234 "method": "bdev_malloc_create" 00:21:54.234 }, 00:21:54.234 { 00:21:54.234 "params": { 00:21:54.234 "io_mechanism": "libaio", 00:21:54.234 "filename": "/dev/nullb0", 00:21:54.234 "name": "null0" 00:21:54.234 }, 00:21:54.234 "method": "bdev_xnvme_create" 00:21:54.234 }, 00:21:54.234 { 00:21:54.234 "method": "bdev_wait_for_examine" 00:21:54.234 } 00:21:54.234 ] 00:21:54.234 } 00:21:54.234 ] 00:21:54.234 } 00:21:54.234 [2024-07-25 09:35:54.611962] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
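The JSON printed just above defines the whole spdk_dd pipeline for this pass: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) as the source and an xNVMe bdev named null0, backed by /dev/nullb0 through libaio, as the target. A minimal standalone sketch of the same copy, assuming the same SPDK checkout and a kernel with null_blk available (the config file path here is illustrative; the test itself feeds the JSON over /dev/fd/62 instead):

# 1 GiB null block device to act as the xNVMe target
sudo modprobe null_blk gb=1
cat > /tmp/xnvme_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# malloc0 -> null0; the later passes below only swap --ib/--ob and the io_mechanism
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json
sudo modprobe -r null_blk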
00:21:54.234 [2024-07-25 09:35:54.613122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74457 ] 00:21:54.234 [2024-07-25 09:35:54.776355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.493 [2024-07-25 09:35:55.001165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.787  Copying: 279/1024 [MB] (279 MBps) Copying: 560/1024 [MB] (280 MBps) Copying: 842/1024 [MB] (282 MBps) Copying: 1024/1024 [MB] (average 282 MBps) 00:22:03.787 00:22:03.787 09:36:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:22:03.787 09:36:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:22:03.787 09:36:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:22:03.787 09:36:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:22:03.787 { 00:22:03.787 "subsystems": [ 00:22:03.787 { 00:22:03.787 "subsystem": "bdev", 00:22:03.787 "config": [ 00:22:03.787 { 00:22:03.787 "params": { 00:22:03.787 "block_size": 512, 00:22:03.787 "num_blocks": 2097152, 00:22:03.787 "name": "malloc0" 00:22:03.787 }, 00:22:03.788 "method": "bdev_malloc_create" 00:22:03.788 }, 00:22:03.788 { 00:22:03.788 "params": { 00:22:03.788 "io_mechanism": "libaio", 00:22:03.788 "filename": "/dev/nullb0", 00:22:03.788 "name": "null0" 00:22:03.788 }, 00:22:03.788 "method": "bdev_xnvme_create" 00:22:03.788 }, 00:22:03.788 { 00:22:03.788 "method": "bdev_wait_for_examine" 00:22:03.788 } 00:22:03.788 ] 00:22:03.788 } 00:22:03.788 ] 00:22:03.788 } 00:22:03.788 [2024-07-25 09:36:04.246110] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:03.788 [2024-07-25 09:36:04.246219] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74566 ] 00:22:04.046 [2024-07-25 09:36:04.408558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.046 [2024-07-25 09:36:04.623058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.353  Copying: 287/1024 [MB] (287 MBps) Copying: 571/1024 [MB] (284 MBps) Copying: 857/1024 [MB] (286 MBps) Copying: 1024/1024 [MB] (average 286 MBps) 00:22:13.353 00:22:13.353 09:36:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:22:13.353 09:36:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:22:13.353 09:36:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:22:13.353 09:36:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:22:13.353 09:36:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:22:13.353 09:36:13 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:22:13.353 { 00:22:13.353 "subsystems": [ 00:22:13.353 { 00:22:13.353 "subsystem": "bdev", 00:22:13.353 "config": [ 00:22:13.353 { 00:22:13.353 "params": { 00:22:13.353 "block_size": 512, 00:22:13.353 "num_blocks": 2097152, 00:22:13.353 "name": "malloc0" 00:22:13.353 }, 00:22:13.353 "method": "bdev_malloc_create" 00:22:13.353 }, 00:22:13.353 { 00:22:13.353 "params": { 00:22:13.353 "io_mechanism": "io_uring", 00:22:13.353 "filename": "/dev/nullb0", 00:22:13.353 "name": "null0" 00:22:13.353 }, 00:22:13.353 "method": "bdev_xnvme_create" 00:22:13.353 }, 00:22:13.353 { 00:22:13.353 "method": "bdev_wait_for_examine" 00:22:13.353 } 00:22:13.353 ] 00:22:13.353 } 00:22:13.353 ] 00:22:13.353 } 00:22:13.353 [2024-07-25 09:36:13.817259] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:13.353 [2024-07-25 09:36:13.817362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74673 ] 00:22:13.612 [2024-07-25 09:36:13.975458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.612 [2024-07-25 09:36:14.191833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.911  Copying: 295/1024 [MB] (295 MBps) Copying: 587/1024 [MB] (292 MBps) Copying: 879/1024 [MB] (292 MBps) Copying: 1024/1024 [MB] (average 293 MBps) 00:22:22.911 00:22:22.911 09:36:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:22:22.911 09:36:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:22:22.911 09:36:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:22:22.911 09:36:23 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:22:22.911 { 00:22:22.911 "subsystems": [ 00:22:22.911 { 00:22:22.911 "subsystem": "bdev", 00:22:22.911 "config": [ 00:22:22.911 { 00:22:22.911 "params": { 00:22:22.911 "block_size": 512, 00:22:22.911 "num_blocks": 2097152, 00:22:22.911 "name": "malloc0" 00:22:22.911 }, 00:22:22.911 "method": "bdev_malloc_create" 00:22:22.911 }, 00:22:22.911 { 00:22:22.911 "params": { 00:22:22.911 "io_mechanism": "io_uring", 00:22:22.911 "filename": "/dev/nullb0", 00:22:22.911 "name": "null0" 00:22:22.911 }, 00:22:22.911 "method": "bdev_xnvme_create" 00:22:22.911 }, 00:22:22.911 { 00:22:22.911 "method": "bdev_wait_for_examine" 00:22:22.911 } 00:22:22.911 ] 00:22:22.911 } 00:22:22.911 ] 00:22:22.911 } 00:22:22.911 [2024-07-25 09:36:23.277223] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:22:22.911 [2024-07-25 09:36:23.277350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74788 ] 00:22:22.911 [2024-07-25 09:36:23.441170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.170 [2024-07-25 09:36:23.652919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.424  Copying: 292/1024 [MB] (292 MBps) Copying: 585/1024 [MB] (292 MBps) Copying: 878/1024 [MB] (293 MBps) Copying: 1024/1024 [MB] (average 293 MBps) 00:22:32.424 00:22:32.424 09:36:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:22:32.424 09:36:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:22:32.424 00:22:32.424 real 0m38.295s 00:22:32.424 user 0m34.439s 00:22:32.424 sys 0m3.379s 00:22:32.424 09:36:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:32.424 09:36:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:22:32.424 ************************************ 00:22:32.424 END TEST xnvme_to_malloc_dd_copy 00:22:32.424 ************************************ 00:22:32.424 09:36:32 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:22:32.424 09:36:32 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:32.424 09:36:32 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:32.424 09:36:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:32.424 ************************************ 00:22:32.424 START TEST xnvme_bdevperf 00:22:32.424 ************************************ 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:22:32.424 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:22:32.425 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:22:32.425 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:22:32.425 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:22:32.425 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:22:32.425 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:22:32.425 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:22:32.425 09:36:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:22:32.425 09:36:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:32.425 09:36:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:32.425 { 00:22:32.425 "subsystems": [ 00:22:32.425 { 00:22:32.425 "subsystem": "bdev", 00:22:32.425 "config": [ 00:22:32.425 { 00:22:32.425 "params": { 00:22:32.425 "io_mechanism": "libaio", 00:22:32.425 "filename": "/dev/nullb0", 00:22:32.425 "name": "null0" 00:22:32.425 }, 00:22:32.425 "method": "bdev_xnvme_create" 00:22:32.425 }, 00:22:32.425 { 00:22:32.425 "method": "bdev_wait_for_examine" 00:22:32.425 } 00:22:32.425 ] 00:22:32.425 } 00:22:32.425 ] 00:22:32.425 } 00:22:32.425 [2024-07-25 09:36:32.991349] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:32.425 [2024-07-25 09:36:32.991474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74915 ] 00:22:32.685 [2024-07-25 09:36:33.161129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.944 [2024-07-25 09:36:33.383745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.203 Running I/O for 5 seconds... 00:22:38.484 00:22:38.484 Latency(us) 00:22:38.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.484 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:22:38.484 null0 : 5.00 184152.02 719.34 0.00 0.00 345.16 126.99 490.09 00:22:38.484 =================================================================================================================== 00:22:38.484 Total : 184152.02 719.34 0.00 0.00 345.16 126.99 490.09 00:22:39.427 09:36:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:22:39.427 09:36:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:22:39.427 09:36:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:22:39.427 09:36:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:22:39.427 09:36:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:22:39.427 09:36:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:39.690 { 00:22:39.690 "subsystems": [ 00:22:39.690 { 00:22:39.690 "subsystem": "bdev", 00:22:39.690 "config": [ 00:22:39.690 { 00:22:39.690 "params": { 00:22:39.690 "io_mechanism": "io_uring", 00:22:39.690 "filename": "/dev/nullb0", 00:22:39.690 "name": "null0" 00:22:39.690 }, 00:22:39.690 "method": "bdev_xnvme_create" 00:22:39.690 }, 00:22:39.690 { 00:22:39.690 "method": "bdev_wait_for_examine" 00:22:39.690 } 00:22:39.690 ] 00:22:39.690 } 00:22:39.690 ] 00:22:39.690 } 00:22:39.690 [2024-07-25 09:36:40.119889] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
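Both bdevperf passes in this test run 4 KiB random reads at queue depth 64 for 5 seconds against the null0 bdev only, so the numbers measure pure I/O-path overhead of the chosen xNVMe io_mechanism. The latency column is consistent with queue_depth / IOPS: for the libaio pass above, 64 / 184152 is roughly 347 us against the reported 345.16 us average, and 184152 x 4 KiB is roughly 719 MiB/s. A sketch of the same invocation outside the harness, reusing the config shape printed above (the file path is illustrative):

sudo modprobe null_blk gb=1
cat > /tmp/xnvme_null0.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
EOF
# 4 KiB random reads, qd 64, 5 s, limited to the null0 bdev (-T);
# the second pass in the log is identical apart from "io_mechanism": "io_uring"
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_null0.json -q 64 -w randread -t 5 -T null0 -o 4096
sudo modprobe -r null_blk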
00:22:39.690 [2024-07-25 09:36:40.119998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75000 ] 00:22:39.690 [2024-07-25 09:36:40.281685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.950 [2024-07-25 09:36:40.501186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.520 Running I/O for 5 seconds... 00:22:45.796 00:22:45.796 Latency(us) 00:22:45.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:45.796 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:22:45.796 null0 : 5.00 232922.37 909.85 0.00 0.00 272.49 163.66 370.25 00:22:45.796 =================================================================================================================== 00:22:45.796 Total : 232922.37 909.85 0.00 0.00 272.49 163.66 370.25 00:22:46.735 09:36:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:22:46.735 09:36:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:22:46.735 00:22:46.735 real 0m14.272s 00:22:46.735 user 0m11.687s 00:22:46.735 sys 0m2.396s 00:22:46.735 09:36:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.735 09:36:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:46.735 ************************************ 00:22:46.735 END TEST xnvme_bdevperf 00:22:46.735 ************************************ 00:22:46.735 ************************************ 00:22:46.735 END TEST nvme_xnvme 00:22:46.735 ************************************ 00:22:46.735 00:22:46.735 real 0m52.811s 00:22:46.735 user 0m46.214s 00:22:46.735 sys 0m5.941s 00:22:46.735 09:36:47 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.735 09:36:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:46.735 09:36:47 -- spdk/autotest.sh@253 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:46.735 09:36:47 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:46.735 09:36:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:46.735 09:36:47 -- common/autotest_common.sh@10 -- # set +x 00:22:46.735 ************************************ 00:22:46.735 START TEST blockdev_xnvme 00:22:46.735 ************************************ 00:22:46.735 09:36:47 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:22:46.995 * Looking for test storage... 
00:22:46.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75140 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:22:46.995 09:36:47 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 75140 00:22:46.995 09:36:47 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 75140 ']' 00:22:46.995 09:36:47 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.995 09:36:47 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.995 09:36:47 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.995 09:36:47 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.995 09:36:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:46.995 [2024-07-25 09:36:47.484158] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
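The spdk_tgt launched here is what the rest of this test talks to: setup_xnvme_conf walks /dev/nvme*n*, skips any zoned namespaces, and registers each remaining node with a bdev_xnvme_create RPC (the exact six calls are printed further down). A sketch of doing one of them by hand against a running target, assuming the namespaces are bound to the kernel nvme driver as they are after the setup.sh reset below (device and bdev names are illustrative):

# register a kernel NVMe namespace as an xNVMe bdev using io_uring, then list the result
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs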
00:22:46.995 [2024-07-25 09:36:47.484709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75140 ] 00:22:47.255 [2024-07-25 09:36:47.648023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.515 [2024-07-25 09:36:47.870862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.454 09:36:48 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:48.454 09:36:48 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:22:48.454 09:36:48 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:22:48.454 09:36:48 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:22:48.454 09:36:48 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:22:48.454 09:36:48 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:22:48.454 09:36:48 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:48.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:48.973 Waiting for block devices as requested 00:22:48.973 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:49.233 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:49.233 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:22:49.233 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:22:54.511 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:22:54.511 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1670 -- # local nvme bdf 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:22:54.511 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n1 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n1 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:22:54.512 09:36:54 blockdev_xnvme -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n2 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n2 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme2n3 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme2n3 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3c3n1 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3c3n1 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1673 -- # is_block_zoned nvme3n1 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1662 -- # local device=nvme3n1 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:54.512 09:36:54 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:22:54.512 nvme0n1 00:22:54.512 nvme1n1 00:22:54.512 nvme2n1 00:22:54.512 nvme2n2 00:22:54.512 nvme2n3 00:22:54.512 nvme3n1 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:54.512 09:36:54 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.512 09:36:54 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:22:54.512 09:36:55 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.512 09:36:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:54.512 09:36:55 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.512 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:22:54.512 09:36:55 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.512 09:36:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:54.512 09:36:55 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.512 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:22:54.512 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:22:54.512 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == 
false)' 00:22:54.512 09:36:55 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.512 09:36:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:54.512 09:36:55 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.512 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:22:54.512 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:22:54.513 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7d9692b6-b7b9-4885-8277-a6025991233c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7d9692b6-b7b9-4885-8277-a6025991233c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "973dacfd-756c-4c84-8bf9-cf88028c2b5e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "973dacfd-756c-4c84-8bf9-cf88028c2b5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "703dff37-3fab-46af-98cf-e677b8b989f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "703dff37-3fab-46af-98cf-e677b8b989f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ddbb7183-410a-4849-b583-3fa71fc7b1ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ddbb7183-410a-4849-b583-3fa71fc7b1ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "16d0a7f5-9a7d-4ce6-96fd-7086ecbd3337"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "16d0a7f5-9a7d-4ce6-96fd-7086ecbd3337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "273b9751-ee01-45f3-be26-71ddafca487b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "273b9751-ee01-45f3-be26-71ddafca487b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:22:54.772 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:22:54.772 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:22:54.772 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:22:54.772 09:36:55 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 75140 00:22:54.772 09:36:55 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 75140 ']' 00:22:54.772 09:36:55 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 75140 00:22:54.772 09:36:55 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:22:54.772 09:36:55 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.772 09:36:55 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75140 00:22:54.772 09:36:55 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:54.772 09:36:55 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:54.772 09:36:55 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
75140' 00:22:54.772 killing process with pid 75140 00:22:54.772 09:36:55 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 75140 00:22:54.772 09:36:55 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 75140 00:22:57.311 09:36:57 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:57.311 09:36:57 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:57.311 09:36:57 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:22:57.311 09:36:57 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:57.311 09:36:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:57.311 ************************************ 00:22:57.311 START TEST bdev_hello_world 00:22:57.311 ************************************ 00:22:57.311 09:36:57 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:22:57.311 [2024-07-25 09:36:57.632266] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:57.311 [2024-07-25 09:36:57.632388] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75517 ] 00:22:57.311 [2024-07-25 09:36:57.791830] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.571 [2024-07-25 09:36:58.007357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.141 [2024-07-25 09:36:58.449143] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:22:58.141 [2024-07-25 09:36:58.449189] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:22:58.141 [2024-07-25 09:36:58.449204] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:22:58.141 [2024-07-25 09:36:58.450865] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:22:58.141 [2024-07-25 09:36:58.451270] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:22:58.141 [2024-07-25 09:36:58.451298] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:22:58.141 [2024-07-25 09:36:58.451474] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
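hello_bdev opens nvme0n1 from the generated bdev.json, writes the "Hello World!" string through the bdev layer, reads it back, and exits, which is exactly the NOTICE sequence above. The equivalent standalone invocation, using the same arguments the harness passed (-b picks which bdev to open):

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1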
00:22:58.141 00:22:58.141 [2024-07-25 09:36:58.451490] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:22:59.521 00:22:59.521 real 0m2.144s 00:22:59.521 user 0m1.806s 00:22:59.521 sys 0m0.225s 00:22:59.521 ************************************ 00:22:59.521 END TEST bdev_hello_world 00:22:59.521 ************************************ 00:22:59.521 09:36:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:59.521 09:36:59 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:22:59.521 09:36:59 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:22:59.521 09:36:59 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:59.521 09:36:59 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:59.521 09:36:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:22:59.521 ************************************ 00:22:59.521 START TEST bdev_bounds 00:22:59.521 ************************************ 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:22:59.521 Process bdevio pid: 75559 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=75559 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 75559' 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 75559 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 75559 ']' 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.521 09:36:59 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:22:59.521 [2024-07-25 09:36:59.841064] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
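bdevio is launched with -w, so it only loads the bdevs from bdev.json and then waits; the CUnit suites below start once tests.py sends the perform_tests RPC. A sketch of the same two-step run (-s 0 mirrors PRE_RESERVED_MEM=0 from the setup above; the harness waits for the RPC socket before triggering the tests):

/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# once /var/tmp/spdk.sock is listening, kick off all suites
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests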
00:22:59.521 [2024-07-25 09:36:59.841278] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75559 ] 00:22:59.521 [2024-07-25 09:37:00.003781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:59.781 [2024-07-25 09:37:00.229496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.781 [2024-07-25 09:37:00.229627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.781 [2024-07-25 09:37:00.229679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.349 09:37:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:00.349 09:37:00 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:23:00.349 09:37:00 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:00.349 I/O targets: 00:23:00.349 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:23:00.349 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:23:00.349 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:00.349 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:00.349 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:00.349 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:23:00.349 00:23:00.349 00:23:00.349 CUnit - A unit testing framework for C - Version 2.1-3 00:23:00.349 http://cunit.sourceforge.net/ 00:23:00.349 00:23:00.349 00:23:00.349 Suite: bdevio tests on: nvme3n1 00:23:00.349 Test: blockdev write read block ...passed 00:23:00.349 Test: blockdev write zeroes read block ...passed 00:23:00.349 Test: blockdev write zeroes read no split ...passed 00:23:00.349 Test: blockdev write zeroes read split ...passed 00:23:00.349 Test: blockdev write zeroes read split partial ...passed 00:23:00.349 Test: blockdev reset ...passed 00:23:00.349 Test: blockdev write read 8 blocks ...passed 00:23:00.349 Test: blockdev write read size > 128k ...passed 00:23:00.349 Test: blockdev write read invalid size ...passed 00:23:00.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:00.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:00.349 Test: blockdev write read max offset ...passed 00:23:00.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:00.349 Test: blockdev writev readv 8 blocks ...passed 00:23:00.349 Test: blockdev writev readv 30 x 1block ...passed 00:23:00.349 Test: blockdev writev readv block ...passed 00:23:00.349 Test: blockdev writev readv size > 128k ...passed 00:23:00.349 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:00.349 Test: blockdev comparev and writev ...passed 00:23:00.349 Test: blockdev nvme passthru rw ...passed 00:23:00.349 Test: blockdev nvme passthru vendor specific ...passed 00:23:00.349 Test: blockdev nvme admin passthru ...passed 00:23:00.349 Test: blockdev copy ...passed 00:23:00.349 Suite: bdevio tests on: nvme2n3 00:23:00.349 Test: blockdev write read block ...passed 00:23:00.349 Test: blockdev write zeroes read block ...passed 00:23:00.349 Test: blockdev write zeroes read no split ...passed 00:23:00.349 Test: blockdev write zeroes read split ...passed 00:23:00.349 Test: blockdev write zeroes read split partial ...passed 00:23:00.349 Test: blockdev reset ...passed 
00:23:00.349 Test: blockdev write read 8 blocks ...passed 00:23:00.349 Test: blockdev write read size > 128k ...passed 00:23:00.349 Test: blockdev write read invalid size ...passed 00:23:00.349 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:00.349 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:00.349 Test: blockdev write read max offset ...passed 00:23:00.349 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:00.349 Test: blockdev writev readv 8 blocks ...passed 00:23:00.349 Test: blockdev writev readv 30 x 1block ...passed 00:23:00.606 Test: blockdev writev readv block ...passed 00:23:00.606 Test: blockdev writev readv size > 128k ...passed 00:23:00.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:00.606 Test: blockdev comparev and writev ...passed 00:23:00.606 Test: blockdev nvme passthru rw ...passed 00:23:00.606 Test: blockdev nvme passthru vendor specific ...passed 00:23:00.606 Test: blockdev nvme admin passthru ...passed 00:23:00.606 Test: blockdev copy ...passed 00:23:00.606 Suite: bdevio tests on: nvme2n2 00:23:00.606 Test: blockdev write read block ...passed 00:23:00.606 Test: blockdev write zeroes read block ...passed 00:23:00.606 Test: blockdev write zeroes read no split ...passed 00:23:00.606 Test: blockdev write zeroes read split ...passed 00:23:00.606 Test: blockdev write zeroes read split partial ...passed 00:23:00.606 Test: blockdev reset ...passed 00:23:00.606 Test: blockdev write read 8 blocks ...passed 00:23:00.606 Test: blockdev write read size > 128k ...passed 00:23:00.606 Test: blockdev write read invalid size ...passed 00:23:00.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:00.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:00.606 Test: blockdev write read max offset ...passed 00:23:00.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:00.606 Test: blockdev writev readv 8 blocks ...passed 00:23:00.606 Test: blockdev writev readv 30 x 1block ...passed 00:23:00.606 Test: blockdev writev readv block ...passed 00:23:00.606 Test: blockdev writev readv size > 128k ...passed 00:23:00.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:00.606 Test: blockdev comparev and writev ...passed 00:23:00.606 Test: blockdev nvme passthru rw ...passed 00:23:00.606 Test: blockdev nvme passthru vendor specific ...passed 00:23:00.606 Test: blockdev nvme admin passthru ...passed 00:23:00.606 Test: blockdev copy ...passed 00:23:00.606 Suite: bdevio tests on: nvme2n1 00:23:00.606 Test: blockdev write read block ...passed 00:23:00.606 Test: blockdev write zeroes read block ...passed 00:23:00.606 Test: blockdev write zeroes read no split ...passed 00:23:00.606 Test: blockdev write zeroes read split ...passed 00:23:00.606 Test: blockdev write zeroes read split partial ...passed 00:23:00.606 Test: blockdev reset ...passed 00:23:00.606 Test: blockdev write read 8 blocks ...passed 00:23:00.606 Test: blockdev write read size > 128k ...passed 00:23:00.606 Test: blockdev write read invalid size ...passed 00:23:00.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:00.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:00.606 Test: blockdev write read max offset ...passed 00:23:00.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:00.606 Test: blockdev writev readv 8 blocks 
...passed 00:23:00.606 Test: blockdev writev readv 30 x 1block ...passed 00:23:00.606 Test: blockdev writev readv block ...passed 00:23:00.606 Test: blockdev writev readv size > 128k ...passed 00:23:00.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:00.606 Test: blockdev comparev and writev ...passed 00:23:00.606 Test: blockdev nvme passthru rw ...passed 00:23:00.606 Test: blockdev nvme passthru vendor specific ...passed 00:23:00.606 Test: blockdev nvme admin passthru ...passed 00:23:00.606 Test: blockdev copy ...passed 00:23:00.606 Suite: bdevio tests on: nvme1n1 00:23:00.606 Test: blockdev write read block ...passed 00:23:00.606 Test: blockdev write zeroes read block ...passed 00:23:00.606 Test: blockdev write zeroes read no split ...passed 00:23:00.606 Test: blockdev write zeroes read split ...passed 00:23:00.606 Test: blockdev write zeroes read split partial ...passed 00:23:00.606 Test: blockdev reset ...passed 00:23:00.606 Test: blockdev write read 8 blocks ...passed 00:23:00.606 Test: blockdev write read size > 128k ...passed 00:23:00.606 Test: blockdev write read invalid size ...passed 00:23:00.606 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:00.606 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:00.606 Test: blockdev write read max offset ...passed 00:23:00.606 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:00.606 Test: blockdev writev readv 8 blocks ...passed 00:23:00.606 Test: blockdev writev readv 30 x 1block ...passed 00:23:00.606 Test: blockdev writev readv block ...passed 00:23:00.606 Test: blockdev writev readv size > 128k ...passed 00:23:00.606 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:00.606 Test: blockdev comparev and writev ...passed 00:23:00.606 Test: blockdev nvme passthru rw ...passed 00:23:00.606 Test: blockdev nvme passthru vendor specific ...passed 00:23:00.606 Test: blockdev nvme admin passthru ...passed 00:23:00.606 Test: blockdev copy ...passed 00:23:00.606 Suite: bdevio tests on: nvme0n1 00:23:00.606 Test: blockdev write read block ...passed 00:23:00.606 Test: blockdev write zeroes read block ...passed 00:23:00.865 Test: blockdev write zeroes read no split ...passed 00:23:00.865 Test: blockdev write zeroes read split ...passed 00:23:00.865 Test: blockdev write zeroes read split partial ...passed 00:23:00.865 Test: blockdev reset ...passed 00:23:00.865 Test: blockdev write read 8 blocks ...passed 00:23:00.865 Test: blockdev write read size > 128k ...passed 00:23:00.865 Test: blockdev write read invalid size ...passed 00:23:00.865 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:00.865 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:00.865 Test: blockdev write read max offset ...passed 00:23:00.865 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:00.865 Test: blockdev writev readv 8 blocks ...passed 00:23:00.865 Test: blockdev writev readv 30 x 1block ...passed 00:23:00.865 Test: blockdev writev readv block ...passed 00:23:00.865 Test: blockdev writev readv size > 128k ...passed 00:23:00.865 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:00.865 Test: blockdev comparev and writev ...passed 00:23:00.865 Test: blockdev nvme passthru rw ...passed 00:23:00.865 Test: blockdev nvme passthru vendor specific ...passed 00:23:00.865 Test: blockdev nvme admin passthru ...passed 00:23:00.865 Test: blockdev copy ...passed 
00:23:00.865 00:23:00.865 Run Summary: Type Total Ran Passed Failed Inactive 00:23:00.865 suites 6 6 n/a 0 0 00:23:00.865 tests 138 138 138 0 0 00:23:00.865 asserts 780 780 780 0 n/a 00:23:00.865 00:23:00.865 Elapsed time = 1.404 seconds 00:23:00.865 0 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 75559 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 75559 ']' 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 75559 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75559 00:23:00.865 killing process with pid 75559 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75559' 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 75559 00:23:00.865 09:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 75559 00:23:02.242 ************************************ 00:23:02.242 END TEST bdev_bounds 00:23:02.242 ************************************ 00:23:02.242 09:37:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:02.242 00:23:02.242 real 0m2.873s 00:23:02.242 user 0m6.698s 00:23:02.242 sys 0m0.356s 00:23:02.242 09:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:02.242 09:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:02.242 09:37:02 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:23:02.242 09:37:02 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:23:02.242 09:37:02 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:02.242 09:37:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:02.242 ************************************ 00:23:02.242 START TEST bdev_nbd 00:23:02.242 ************************************ 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
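Editor's note: the bdev_nbd setup being declared in the surrounding trace lines drives the remainder of this section. As a reading aid, here is a condensed sketch of the flow the trace executes, reconstructed from the xtrace output that follows (it is not a verbatim excerpt of nbd_function_test; paths are abbreviated, and the socket, bdev and device names are taken from the trace itself):

    # start the bdev_svc app in the background on a dedicated RPC socket with the bdev JSON config
    # (the trace records its pid as nbd_pid=75624 and waits for the socket with waitforlisten)
    test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json test/bdev/bdev.json &
    # attach each bdev to an NBD device over that socket
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    # ... repeated for nvme1n1, nvme2n1, nvme2n2, nvme2n3 and nvme3n1 ...
    # write random data through every NBD device, then read it back and compare
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0
    # detach the devices again when done
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0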
00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:23:02.242 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=75624 00:23:02.243 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:02.243 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:02.243 09:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 75624 /var/tmp/spdk-nbd.sock 00:23:02.243 09:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 75624 ']' 00:23:02.243 09:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:02.243 09:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:02.243 09:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:02.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:02.243 09:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:02.243 09:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:02.243 [2024-07-25 09:37:02.785782] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
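Editor's note: once bdev_svc is listening on /var/tmp/spdk-nbd.sock, every nbd_start_disk call in the trace below is followed by a waitfornbd readiness check. A rough reconstruction of that helper, pieced together from the xtrace lines that follow: the two retry loops, the /proc/partitions poll, the 4 KiB O_DIRECT read and the stat size check are all visible in the trace; anything else (the sleep between polls, the abbreviated scratch-file path) is an assumption, not the literal autotest_common.sh source.

    waitfornbd() {                              # editorial sketch, not the real helper
        local nbd_name=$1 i size
        # 1) wait until the kernel lists the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                           # assumed back-off between polls
        done
        # 2) retry a single 4 KiB O_DIRECT read until it succeeds
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1                           # assumed back-off between attempts
        done
        size=$(stat -c %s nbdtest)
        rm -f nbdtest
        [ "$size" != 0 ]                        # a non-empty read means the device is usable
    }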
00:23:02.243 [2024-07-25 09:37:02.785983] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.501 [2024-07-25 09:37:02.948294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.768 [2024-07-25 09:37:03.169302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.345 
1+0 records in 00:23:03.345 1+0 records out 00:23:03.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497988 s, 8.2 MB/s 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:03.345 09:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:03.604 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:03.605 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.605 1+0 records in 00:23:03.605 1+0 records out 00:23:03.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728545 s, 5.6 MB/s 00:23:03.605 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.605 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:03.605 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.605 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:03.605 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:03.605 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:03.605 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:03.605 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:23:03.864 09:37:04 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:03.864 1+0 records in 00:23:03.864 1+0 records out 00:23:03.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705665 s, 5.8 MB/s 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:03.864 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.123 1+0 records in 00:23:04.123 1+0 records out 00:23:04.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00143023 s, 2.9 MB/s 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:04.123 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.383 1+0 records in 00:23:04.383 1+0 records out 00:23:04.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066443 s, 6.2 MB/s 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:23:04.383 09:37:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:04.383 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:04.383 1+0 records in 00:23:04.383 1+0 records out 00:23:04.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968418 s, 4.2 MB/s 00:23:04.384 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.384 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:04.384 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:04.384 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:04.384 09:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:04.384 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:04.384 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:04.384 09:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:04.642 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:04.642 { 00:23:04.642 "nbd_device": "/dev/nbd0", 00:23:04.642 "bdev_name": "nvme0n1" 00:23:04.642 }, 00:23:04.642 { 00:23:04.642 "nbd_device": "/dev/nbd1", 00:23:04.642 "bdev_name": "nvme1n1" 00:23:04.642 }, 00:23:04.642 { 00:23:04.642 "nbd_device": "/dev/nbd2", 00:23:04.642 "bdev_name": "nvme2n1" 00:23:04.642 }, 00:23:04.642 { 00:23:04.642 "nbd_device": "/dev/nbd3", 00:23:04.642 "bdev_name": "nvme2n2" 00:23:04.642 }, 00:23:04.642 { 00:23:04.642 "nbd_device": "/dev/nbd4", 00:23:04.642 "bdev_name": "nvme2n3" 00:23:04.642 }, 00:23:04.642 { 00:23:04.642 "nbd_device": "/dev/nbd5", 00:23:04.642 "bdev_name": "nvme3n1" 00:23:04.642 } 00:23:04.642 ]' 00:23:04.642 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:04.642 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:04.642 { 00:23:04.643 "nbd_device": "/dev/nbd0", 00:23:04.643 "bdev_name": "nvme0n1" 00:23:04.643 }, 00:23:04.643 { 00:23:04.643 "nbd_device": "/dev/nbd1", 00:23:04.643 "bdev_name": "nvme1n1" 00:23:04.643 }, 00:23:04.643 { 00:23:04.643 "nbd_device": "/dev/nbd2", 00:23:04.643 "bdev_name": "nvme2n1" 00:23:04.643 }, 00:23:04.643 { 00:23:04.643 "nbd_device": "/dev/nbd3", 00:23:04.643 "bdev_name": "nvme2n2" 00:23:04.643 }, 00:23:04.643 { 00:23:04.643 "nbd_device": "/dev/nbd4", 00:23:04.643 "bdev_name": "nvme2n3" 00:23:04.643 }, 00:23:04.643 { 00:23:04.643 "nbd_device": "/dev/nbd5", 00:23:04.643 "bdev_name": "nvme3n1" 00:23:04.643 } 00:23:04.643 ]' 00:23:04.643 09:37:05 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:04.643 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:23:04.643 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:04.643 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:23:04.643 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:04.643 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:04.643 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.643 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:04.902 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:04.902 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:04.902 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:04.902 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:04.902 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:04.902 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:04.902 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:04.902 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:04.902 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:04.902 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:05.162 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:23:05.421 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:23:05.421 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:23:05.421 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:23:05.421 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.421 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.422 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:23:05.422 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:05.422 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.422 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:05.422 09:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:23:05.681 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:23:05.681 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:23:05.681 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:23:05.681 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.681 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.681 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:23:05.681 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:05.681 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.681 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:05.681 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:05.941 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:23:06.200 /dev/nbd0 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:06.200 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:06.201 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:23:06.201 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:06.201 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:06.201 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:06.201 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.201 1+0 records in 00:23:06.201 1+0 records out 00:23:06.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814631 s, 5.0 MB/s 00:23:06.460 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.460 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:06.460 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.460 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:06.460 09:37:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:06.460 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.460 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:06.460 09:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:23:06.460 /dev/nbd1 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.460 1+0 records in 00:23:06.460 1+0 records out 00:23:06.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000860488 s, 4.8 MB/s 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:06.460 09:37:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:06.460 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:23:06.725 /dev/nbd10 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.725 1+0 records in 00:23:06.725 1+0 records out 00:23:06.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525003 s, 7.8 MB/s 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.725 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:06.726 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:23:06.988 /dev/nbd11 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:06.988 09:37:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:06.988 1+0 records in 00:23:06.988 1+0 records out 00:23:06.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879875 s, 4.7 MB/s 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:06.988 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:23:07.246 /dev/nbd12 00:23:07.246 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:23:07.246 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:23:07.246 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:23:07.246 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:07.247 1+0 records in 00:23:07.247 1+0 records out 00:23:07.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539811 s, 7.6 MB/s 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:07.247 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:23:07.508 /dev/nbd13 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:07.508 1+0 records in 00:23:07.508 1+0 records out 00:23:07.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00060984 s, 6.7 MB/s 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:07.508 09:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:07.508 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:07.508 { 00:23:07.508 "nbd_device": "/dev/nbd0", 00:23:07.508 "bdev_name": "nvme0n1" 00:23:07.508 }, 00:23:07.508 { 00:23:07.508 "nbd_device": "/dev/nbd1", 00:23:07.508 "bdev_name": "nvme1n1" 00:23:07.508 }, 00:23:07.508 { 00:23:07.508 "nbd_device": "/dev/nbd10", 00:23:07.508 "bdev_name": "nvme2n1" 00:23:07.508 }, 00:23:07.508 { 00:23:07.508 "nbd_device": "/dev/nbd11", 00:23:07.508 "bdev_name": "nvme2n2" 00:23:07.508 }, 00:23:07.508 { 00:23:07.508 "nbd_device": "/dev/nbd12", 00:23:07.508 "bdev_name": "nvme2n3" 00:23:07.508 }, 00:23:07.508 { 00:23:07.508 "nbd_device": "/dev/nbd13", 00:23:07.508 "bdev_name": "nvme3n1" 00:23:07.508 } 00:23:07.508 ]' 00:23:07.508 09:37:08 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:23:07.508 { 00:23:07.508 "nbd_device": "/dev/nbd0", 00:23:07.508 "bdev_name": "nvme0n1" 00:23:07.508 }, 00:23:07.508 { 00:23:07.508 "nbd_device": "/dev/nbd1", 00:23:07.508 "bdev_name": "nvme1n1" 00:23:07.508 }, 00:23:07.508 { 00:23:07.509 "nbd_device": "/dev/nbd10", 00:23:07.509 "bdev_name": "nvme2n1" 00:23:07.509 }, 00:23:07.509 { 00:23:07.509 "nbd_device": "/dev/nbd11", 00:23:07.509 "bdev_name": "nvme2n2" 00:23:07.509 }, 00:23:07.509 { 00:23:07.509 "nbd_device": "/dev/nbd12", 00:23:07.509 "bdev_name": "nvme2n3" 00:23:07.509 }, 00:23:07.509 { 00:23:07.509 "nbd_device": "/dev/nbd13", 00:23:07.509 "bdev_name": "nvme3n1" 00:23:07.509 } 00:23:07.509 ]' 00:23:07.509 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:07.767 /dev/nbd1 00:23:07.767 /dev/nbd10 00:23:07.767 /dev/nbd11 00:23:07.767 /dev/nbd12 00:23:07.767 /dev/nbd13' 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:07.767 /dev/nbd1 00:23:07.767 /dev/nbd10 00:23:07.767 /dev/nbd11 00:23:07.767 /dev/nbd12 00:23:07.767 /dev/nbd13' 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:07.767 256+0 records in 00:23:07.767 256+0 records out 00:23:07.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129899 s, 80.7 MB/s 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:07.767 256+0 records in 00:23:07.767 256+0 records out 00:23:07.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0899376 s, 11.7 MB/s 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:07.767 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:08.026 256+0 records in 00:23:08.026 256+0 records out 00:23:08.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.110033 
s, 9.5 MB/s 00:23:08.026 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:08.026 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:23:08.026 256+0 records in 00:23:08.026 256+0 records out 00:23:08.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0926733 s, 11.3 MB/s 00:23:08.026 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:08.026 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:23:08.026 256+0 records in 00:23:08.026 256+0 records out 00:23:08.026 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.091784 s, 11.4 MB/s 00:23:08.026 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:08.026 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:23:08.286 256+0 records in 00:23:08.286 256+0 records out 00:23:08.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0914519 s, 11.5 MB/s 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:23:08.286 256+0 records in 00:23:08.286 256+0 records out 00:23:08.286 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0903611 s, 11.6 MB/s 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 
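Editor's note: the dd/cmp sequence running above and below follows one fixed pattern per NBD device. Condensed here for reference; the loop itself is an editorial reconstruction, but every individual command appears verbatim in the surrounding xtrace (scratch-file path abbreviated):

    # 1 MiB of random reference data, then write-through and byte-compare for each device
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write phase
        cmp -b -n 1M nbdrandtest "$nbd"                              # verify phase: read back and compare
    done
    rm nbdrandtest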
00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.286 09:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:08.546 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:08.546 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:08.546 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:08.546 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.546 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.546 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:08.546 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.546 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.546 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.546 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd10 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:08.806 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:23:09.066 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:23:09.066 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:23:09.066 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:23:09.066 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.066 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.066 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:23:09.066 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:09.066 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.066 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:09.066 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:23:09.326 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:23:09.326 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:23:09.326 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:23:09.326 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.326 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:09.326 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:23:09.326 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:09.326 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.326 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:09.326 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:23:09.585 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:23:09.586 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:23:09.586 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:23:09.586 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:09.586 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:23:09.586 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:23:09.586 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:09.586 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:09.586 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:09.586 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.586 09:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:09.586 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:09.586 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:09.586 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:09.845 malloc_lvol_verify 00:23:09.845 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:10.105 941e911b-5ec7-4c38-a745-d2f5b8bb9a90 00:23:10.105 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:10.370 7816a524-15fd-4aad-a804-9a87f845673f 00:23:10.370 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:10.370 /dev/nbd0 00:23:10.640 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:23:10.640 mke2fs 1.46.5 (30-Dec-2021) 00:23:10.640 Discarding device blocks: 0/4096 done 00:23:10.640 Creating filesystem with 4096 1k blocks and 1024 inodes 
00:23:10.640 00:23:10.640 Allocating group tables: 0/1 done 00:23:10.640 Writing inode tables: 0/1 done 00:23:10.640 Creating journal (1024 blocks): done 00:23:10.640 Writing superblocks and filesystem accounting information: 0/1 done 00:23:10.640 00:23:10.640 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:23:10.641 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:10.641 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:10.641 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:10.641 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:10.641 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:10.641 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:10.641 09:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 75624 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 75624 ']' 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 75624 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75624 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:10.641 killing process with pid 75624 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75624' 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 75624 00:23:10.641 09:37:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 75624 00:23:12.020 09:37:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:12.020 00:23:12.020 real 0m9.854s 00:23:12.020 user 0m13.092s 00:23:12.020 sys 0m3.540s 00:23:12.020 ************************************ 00:23:12.020 END TEST bdev_nbd 00:23:12.020 ************************************ 
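The teardown traced above repeats one small pattern per device: ask the target over its RPC socket to stop the nbd export, then poll /proc/partitions until the kernel device disappears. A sketch of that pattern, assuming the same socket path and device list as in the log; the retry back-off interval is an assumption, since the trace only shows the counter and the grep:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    for dev in "${nbd_list[@]}"; do
        "$rpc" -s "$sock" nbd_stop_disk "$dev"
        name=$(basename "$dev")
        # poll until the device is gone from /proc/partitions, bounded at 20 tries as in the trace
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$name" /proc/partitions; then
                sleep 0.2    # assumed back-off; the interval is not visible in the trace
            else
                break
            fi
        done
    done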
00:23:12.020 09:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:12.020 09:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:12.020 09:37:12 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:23:12.020 09:37:12 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:23:12.020 09:37:12 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:23:12.020 09:37:12 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:23:12.020 09:37:12 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:12.020 09:37:12 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:12.020 09:37:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:12.020 ************************************ 00:23:12.020 START TEST bdev_fio 00:23:12.020 ************************************ 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:23:12.020 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:23:12.020 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ 
fio-3.35 == *\f\i\o\-\3* ]] 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:12.281 ************************************ 00:23:12.281 START TEST bdev_fio_rw_verify 00:23:12.281 ************************************ 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:12.281 09:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:23:12.541 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.541 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.541 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.541 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.541 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.541 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:23:12.541 fio-3.35 00:23:12.541 Starting 6 threads 00:23:24.750 00:23:24.750 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=76025: Thu Jul 25 09:37:23 2024 00:23:24.750 read: IOPS=34.3k, BW=134MiB/s (141MB/s)(1340MiB/10001msec) 00:23:24.750 slat (usec): 
min=2, max=4886, avg= 8.05, stdev=14.72 00:23:24.750 clat (usec): min=81, max=6632, avg=433.90, stdev=224.28 00:23:24.750 lat (usec): min=88, max=6639, avg=441.95, stdev=225.93 00:23:24.750 clat percentiles (usec): 00:23:24.750 | 50.000th=[ 396], 99.000th=[ 1057], 99.900th=[ 1565], 99.990th=[ 4752], 00:23:24.750 | 99.999th=[ 6587] 00:23:24.750 write: IOPS=34.6k, BW=135MiB/s (142MB/s)(1352MiB/10001msec); 0 zone resets 00:23:24.750 slat (usec): min=4, max=5145, avg=36.57, stdev=50.69 00:23:24.750 clat (usec): min=63, max=6745, avg=604.62, stdev=287.53 00:23:24.750 lat (usec): min=84, max=6774, avg=641.19, stdev=297.59 00:23:24.750 clat percentiles (usec): 00:23:24.750 | 50.000th=[ 570], 99.000th=[ 1385], 99.900th=[ 1991], 99.990th=[ 5014], 00:23:24.750 | 99.999th=[ 6521] 00:23:24.750 bw ( KiB/s): min=114807, max=162372, per=100.00%, avg=138454.63, stdev=2110.18, samples=114 00:23:24.750 iops : min=28701, max=40593, avg=34613.00, stdev=527.56, samples=114 00:23:24.750 lat (usec) : 100=0.01%, 250=13.98%, 500=39.55%, 750=28.82%, 1000=12.76% 00:23:24.750 lat (msec) : 2=4.81%, 4=0.05%, 10=0.03% 00:23:24.750 cpu : usr=52.59%, sys=26.56%, ctx=9634, majf=0, minf=28313 00:23:24.750 IO depths : 1=11.7%, 2=24.0%, 4=51.0%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:24.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.750 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:24.750 issued rwts: total=343147,346102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:24.750 latency : target=0, window=0, percentile=100.00%, depth=8 00:23:24.750 00:23:24.750 Run status group 0 (all jobs): 00:23:24.750 READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=1340MiB (1406MB), run=10001-10001msec 00:23:24.750 WRITE: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=1352MiB (1418MB), run=10001-10001msec 00:23:24.750 ----------------------------------------------------- 00:23:24.750 Suppressions used: 00:23:24.750 count bytes template 00:23:24.750 6 48 /usr/src/fio/parse.c 00:23:24.750 2704 259584 /usr/src/fio/iolog.c 00:23:24.750 1 8 libtcmalloc_minimal.so 00:23:24.750 1 904 libcrypto.so 00:23:24.750 ----------------------------------------------------- 00:23:24.750 00:23:24.750 00:23:24.750 real 0m12.431s 00:23:24.750 user 0m33.589s 00:23:24.750 sys 0m16.275s 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:24.750 ************************************ 00:23:24.750 END TEST bdev_fio_rw_verify 00:23:24.750 ************************************ 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 
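For orientation, fio_config_gen above starts from a template whose contents are not reproduced in this log, and the loop over bdevs_name then appends one job section per xNVMe bdev, so the generated bdev.fio ends up shaped roughly as below; only lines actually echoed in the trace (serialize_overlap=1 and the per-job sections) are reproduced, everything else is left out:

    serialize_overlap=1

    [job_nvme0n1]
    filename=nvme0n1
    [job_nvme1n1]
    filename=nvme1n1
    [job_nvme2n1]
    filename=nvme2n1
    [job_nvme2n2]
    filename=nvme2n2
    [job_nvme2n3]
    filename=nvme2n3
    [job_nvme3n1]
    filename=nvme3n1

fio then runs against this file through the SPDK plugin (--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json), which is why each filename is a bdev name rather than a block-device path.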
00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:23:24.750 09:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:23:24.751 09:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7d9692b6-b7b9-4885-8277-a6025991233c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7d9692b6-b7b9-4885-8277-a6025991233c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "973dacfd-756c-4c84-8bf9-cf88028c2b5e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "973dacfd-756c-4c84-8bf9-cf88028c2b5e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "703dff37-3fab-46af-98cf-e677b8b989f5"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "703dff37-3fab-46af-98cf-e677b8b989f5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": 
false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "ddbb7183-410a-4849-b583-3fa71fc7b1ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ddbb7183-410a-4849-b583-3fa71fc7b1ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "16d0a7f5-9a7d-4ce6-96fd-7086ecbd3337"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "16d0a7f5-9a7d-4ce6-96fd-7086ecbd3337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "273b9751-ee01-45f3-be26-71ddafca487b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "273b9751-ee01-45f3-be26-71ddafca487b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:23:24.751 09:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:23:24.751 09:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:23:24.751 /home/vagrant/spdk_repo/spdk 00:23:24.751 09:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:23:24.751 09:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM 
EXIT 00:23:24.751 09:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:23:24.751 00:23:24.751 real 0m12.638s 00:23:24.751 user 0m33.708s 00:23:24.751 sys 0m16.368s 00:23:24.751 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:24.751 09:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:23:24.751 ************************************ 00:23:24.751 END TEST bdev_fio 00:23:24.751 ************************************ 00:23:24.751 09:37:25 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:24.751 09:37:25 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:24.751 09:37:25 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:23:24.751 09:37:25 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:24.751 09:37:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:24.751 ************************************ 00:23:24.751 START TEST bdev_verify 00:23:24.751 ************************************ 00:23:24.751 09:37:25 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:25.011 [2024-07-25 09:37:25.384311] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:25.011 [2024-07-25 09:37:25.384415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76196 ] 00:23:25.011 [2024-07-25 09:37:25.546602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:25.270 [2024-07-25 09:37:25.769123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.270 [2024-07-25 09:37:25.769152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.837 Running I/O for 5 seconds... 
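The verify stage that has just started is driven by the bdevperf example application rather than fio. The invocation from the trace is repeated below with the options annotated; the annotations are inferred from the run output (queue depth and IO size are echoed back in the job headers), and -C is left unannotated rather than guessed:

    # -q 128    : 128 outstanding I/Os per job (shown as "depth: 128" in the job header)
    # -o 4096   : 4 KiB I/O size (shown as "IO size: 4096")
    # -w verify : data-verification workload
    # -t 5      : run for 5 seconds
    # -m 0x3    : core mask covering the two reactors started above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''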
00:23:31.112 00:23:31.112 Latency(us) 00:23:31.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.112 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0x0 length 0xa0000 00:23:31.112 nvme0n1 : 5.03 1960.27 7.66 0.00 0.00 65193.03 10417.08 64105.08 00:23:31.112 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0xa0000 length 0xa0000 00:23:31.112 nvme0n1 : 5.05 1976.48 7.72 0.00 0.00 64654.36 7841.43 59984.04 00:23:31.112 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0x0 length 0xbd0bd 00:23:31.112 nvme1n1 : 5.05 2853.12 11.15 0.00 0.00 44672.50 5494.72 52886.69 00:23:31.112 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:23:31.112 nvme1n1 : 5.05 2747.63 10.73 0.00 0.00 46317.30 4664.79 51741.96 00:23:31.112 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0x0 length 0x80000 00:23:31.112 nvme2n1 : 5.05 1977.36 7.72 0.00 0.00 64452.55 8299.32 59526.15 00:23:31.112 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0x80000 length 0x80000 00:23:31.112 nvme2n1 : 5.06 1997.70 7.80 0.00 0.00 63616.82 5437.48 61815.62 00:23:31.112 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0x0 length 0x80000 00:23:31.112 nvme2n2 : 5.05 1978.91 7.73 0.00 0.00 64258.07 8356.56 67310.34 00:23:31.112 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0x80000 length 0x80000 00:23:31.112 nvme2n2 : 5.06 1999.23 7.81 0.00 0.00 63428.38 8413.79 58152.47 00:23:31.112 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0x0 length 0x80000 00:23:31.112 nvme2n3 : 5.05 1976.46 7.72 0.00 0.00 64246.48 10703.26 59526.15 00:23:31.112 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0x80000 length 0x80000 00:23:31.112 nvme2n3 : 5.06 1998.73 7.81 0.00 0.00 63372.21 8928.92 54718.27 00:23:31.112 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0x0 length 0x20000 00:23:31.112 nvme3n1 : 5.06 1974.69 7.71 0.00 0.00 64213.55 4206.90 61815.62 00:23:31.112 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:31.112 Verification LBA range: start 0x20000 length 0x20000 00:23:31.112 nvme3n1 : 5.06 1997.24 7.80 0.00 0.00 63387.11 5780.90 56778.79 00:23:31.112 =================================================================================================================== 00:23:31.112 Total : 25437.82 99.37 0.00 0.00 59983.92 4206.90 67310.34 00:23:32.052 00:23:32.052 real 0m7.351s 00:23:32.052 user 0m11.429s 00:23:32.052 sys 0m1.844s 00:23:32.052 09:37:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:32.052 09:37:32 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:32.052 ************************************ 00:23:32.052 END TEST bdev_verify 00:23:32.052 ************************************ 00:23:32.311 09:37:32 blockdev_xnvme -- bdev/blockdev.sh@777 -- # 
run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:32.311 09:37:32 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:23:32.311 09:37:32 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:32.311 09:37:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:32.311 ************************************ 00:23:32.311 START TEST bdev_verify_big_io 00:23:32.311 ************************************ 00:23:32.311 09:37:32 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:32.311 [2024-07-25 09:37:32.795733] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:32.311 [2024-07-25 09:37:32.795835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76301 ] 00:23:32.569 [2024-07-25 09:37:32.958442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:32.569 [2024-07-25 09:37:33.181255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.569 [2024-07-25 09:37:33.181364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.508 Running I/O for 5 seconds... 00:23:40.074 00:23:40.074 Latency(us) 00:23:40.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.074 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0x0 length 0xa000 00:23:40.074 nvme0n1 : 5.62 159.43 9.96 0.00 0.00 780177.87 168504.79 754608.41 00:23:40.074 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0xa000 length 0xa000 00:23:40.074 nvme0n1 : 5.68 174.67 10.92 0.00 0.00 714935.08 133704.89 820545.06 00:23:40.074 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0x0 length 0xbd0b 00:23:40.074 nvme1n1 : 5.62 193.51 12.09 0.00 0.00 627223.54 71431.38 644713.98 00:23:40.074 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0xbd0b length 0xbd0b 00:23:40.074 nvme1n1 : 5.62 176.79 11.05 0.00 0.00 694250.39 11447.34 915786.90 00:23:40.074 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0x0 length 0x8000 00:23:40.074 nvme2n1 : 5.62 167.83 10.49 0.00 0.00 704424.45 167589.00 688671.75 00:23:40.074 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0x8000 length 0x8000 00:23:40.074 nvme2n1 : 5.74 188.14 11.76 0.00 0.00 624211.24 13336.15 787576.73 00:23:40.074 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0x0 length 0x8000 00:23:40.074 nvme2n2 : 5.68 180.22 11.26 0.00 0.00 644864.22 48307.76 1025681.33 00:23:40.074 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0x8000 length 0x8000 00:23:40.074 nvme2n2 : 5.68 157.69 9.86 
0.00 0.00 731636.05 84252.39 593429.91 00:23:40.074 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0x0 length 0x8000 00:23:40.074 nvme2n3 : 5.75 128.06 8.00 0.00 0.00 883003.66 75094.53 1370017.20 00:23:40.074 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0x8000 length 0x8000 00:23:40.074 nvme2n3 : 5.74 94.72 5.92 0.00 0.00 1190205.96 13336.15 2432330.01 00:23:40.074 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0x0 length 0x2000 00:23:40.074 nvme3n1 : 5.75 182.21 11.39 0.00 0.00 616726.95 1802.96 1772963.44 00:23:40.074 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:40.074 Verification LBA range: start 0x2000 length 0x2000 00:23:40.074 nvme3n1 : 5.80 183.43 11.46 0.00 0.00 604907.89 736.92 1289427.95 00:23:40.074 =================================================================================================================== 00:23:40.074 Total : 1986.72 124.17 0.00 0.00 709632.12 736.92 2432330.01 00:23:40.642 00:23:40.643 real 0m8.446s 00:23:40.643 user 0m15.058s 00:23:40.643 sys 0m0.608s 00:23:40.643 09:37:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:40.643 ************************************ 00:23:40.643 END TEST bdev_verify_big_io 00:23:40.643 ************************************ 00:23:40.643 09:37:41 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:40.643 09:37:41 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:40.643 09:37:41 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:23:40.643 09:37:41 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:40.643 09:37:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:40.643 ************************************ 00:23:40.643 START TEST bdev_write_zeroes 00:23:40.643 ************************************ 00:23:40.643 09:37:41 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:40.903 [2024-07-25 09:37:41.283626] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:40.903 [2024-07-25 09:37:41.283734] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76422 ] 00:23:40.903 [2024-07-25 09:37:41.444565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.163 [2024-07-25 09:37:41.673310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.732 Running I/O for 1 seconds... 
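Three bdevperf passes run back to back in this stretch of the log; for easier comparison, their arguments (verbatim from the invocations above) differ only as follows:

    # bdev_verify        : -q 128 -o 4096  -w verify        -t 5 -C -m 0x3
    # bdev_verify_big_io : -q 128 -o 65536 -w verify        -t 5 -C -m 0x3
    # bdev_write_zeroes  : -q 128 -o 4096  -w write_zeroes  -t 1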
00:23:42.669 00:23:42.669 Latency(us) 00:23:42.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.669 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.669 nvme0n1 : 1.01 14822.51 57.90 0.00 0.00 8626.32 7383.53 20948.63 00:23:42.669 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.669 nvme1n1 : 1.01 15313.82 59.82 0.00 0.00 8328.75 5294.39 16598.64 00:23:42.669 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.669 nvme2n1 : 1.01 14792.34 57.78 0.00 0.00 8586.58 7097.35 21177.57 00:23:42.669 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.669 nvme2n2 : 1.01 14777.14 57.72 0.00 0.00 8590.29 7097.35 21520.99 00:23:42.669 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.669 nvme2n3 : 1.01 14761.81 57.66 0.00 0.00 8595.58 7097.35 21749.94 00:23:42.669 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:23:42.669 nvme3n1 : 1.02 14746.72 57.60 0.00 0.00 8600.03 7240.44 21978.89 00:23:42.669 =================================================================================================================== 00:23:42.669 Total : 89214.34 348.49 0.00 0.00 8553.31 5294.39 21978.89 00:23:44.050 00:23:44.051 real 0m3.251s 00:23:44.051 user 0m2.497s 00:23:44.051 sys 0m0.598s 00:23:44.051 09:37:44 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:44.051 09:37:44 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:23:44.051 ************************************ 00:23:44.051 END TEST bdev_write_zeroes 00:23:44.051 ************************************ 00:23:44.051 09:37:44 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:44.051 09:37:44 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:23:44.051 09:37:44 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:44.051 09:37:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:44.051 ************************************ 00:23:44.051 START TEST bdev_json_nonenclosed 00:23:44.051 ************************************ 00:23:44.051 09:37:44 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:44.051 [2024-07-25 09:37:44.601701] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:44.051 [2024-07-25 09:37:44.601801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76482 ] 00:23:44.310 [2024-07-25 09:37:44.764191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.569 [2024-07-25 09:37:44.983653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.569 [2024-07-25 09:37:44.983748] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:23:44.569 [2024-07-25 09:37:44.983769] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:44.569 [2024-07-25 09:37:44.983779] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:44.828 00:23:44.828 real 0m0.904s 00:23:44.828 user 0m0.686s 00:23:44.828 sys 0m0.112s 00:23:44.828 09:37:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:44.828 09:37:45 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:23:44.828 ************************************ 00:23:44.828 END TEST bdev_json_nonenclosed 00:23:44.828 ************************************ 00:23:45.088 09:37:45 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:45.088 09:37:45 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:23:45.088 09:37:45 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:45.088 09:37:45 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:23:45.088 ************************************ 00:23:45.088 START TEST bdev_json_nonarray 00:23:45.088 ************************************ 00:23:45.088 09:37:45 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:45.088 [2024-07-25 09:37:45.564785] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:45.088 [2024-07-25 09:37:45.564886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76513 ] 00:23:45.348 [2024-07-25 09:37:45.726651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.348 [2024-07-25 09:37:45.943326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.348 [2024-07-25 09:37:45.943418] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:23:45.348 [2024-07-25 09:37:45.943439] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:23:45.348 [2024-07-25 09:37:45.943449] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:45.917 00:23:45.917 real 0m0.892s 00:23:45.917 user 0m0.665s 00:23:45.917 sys 0m0.123s 00:23:45.917 09:37:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:45.917 09:37:46 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:23:45.917 ************************************ 00:23:45.917 END TEST bdev_json_nonarray 00:23:45.917 ************************************ 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:23:45.917 09:37:46 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:46.485 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:01.368 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:01.368 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:13.617 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:24:13.617 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:24:13.617 00:24:13.617 real 1m25.473s 00:24:13.617 user 1m38.373s 00:24:13.617 sys 1m12.947s 00:24:13.617 09:38:12 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:13.617 09:38:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:24:13.617 ************************************ 00:24:13.617 END TEST blockdev_xnvme 00:24:13.617 ************************************ 00:24:13.617 09:38:12 -- spdk/autotest.sh@255 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:24:13.617 09:38:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:13.617 09:38:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:13.617 09:38:12 -- common/autotest_common.sh@10 -- # set +x 00:24:13.617 ************************************ 00:24:13.617 START TEST ublk 00:24:13.617 ************************************ 00:24:13.617 09:38:12 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:24:13.617 * Looking for test storage... 
00:24:13.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:24:13.617 09:38:12 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:24:13.617 09:38:12 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:24:13.617 09:38:12 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:24:13.617 09:38:12 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:24:13.617 09:38:12 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:24:13.617 09:38:12 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:24:13.617 09:38:12 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:24:13.617 09:38:12 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:24:13.617 09:38:12 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:24:13.617 09:38:12 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:13.617 09:38:12 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:13.617 09:38:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:13.617 ************************************ 00:24:13.617 START TEST test_save_ublk_config 00:24:13.617 ************************************ 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=77052 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 77052 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 77052 ']' 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.617 09:38:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:13.617 [2024-07-25 09:38:13.016300] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
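test_save_ublk_config, starting up here, exercises the configuration snapshot path: it boots spdk_tgt with ublk tracing, creates a ublk target and a malloc-backed ublk disk over RPC, and then captures the live configuration, which is the JSON block printed further down. rpc_cmd in the test is a thin wrapper; with the plain CLI the capture step looks roughly like this (the output file name is illustrative only):

    # snapshot the running target's configuration as JSON over the default RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > ublk_config.json    # file name is an assumption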
00:24:13.617 [2024-07-25 09:38:13.016419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77052 ] 00:24:13.617 [2024-07-25 09:38:13.179159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.617 [2024-07-25 09:38:13.422423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.889 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.889 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:24:13.889 09:38:14 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:24:13.889 09:38:14 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:24:13.889 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.889 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:13.889 [2024-07-25 09:38:14.349249] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:13.889 [2024-07-25 09:38:14.350558] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:13.889 malloc0 00:24:13.889 [2024-07-25 09:38:14.437622] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:24:13.889 [2024-07-25 09:38:14.437707] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:24:13.889 [2024-07-25 09:38:14.437717] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:13.889 [2024-07-25 09:38:14.437725] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:13.889 [2024-07-25 09:38:14.444272] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:13.889 [2024-07-25 09:38:14.444305] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:13.889 [2024-07-25 09:38:14.452259] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:13.890 [2024-07-25 09:38:14.452389] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:13.890 [2024-07-25 09:38:14.476283] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:13.890 0 00:24:13.890 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.890 09:38:14 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:24:13.890 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.890 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:14.150 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.150 09:38:14 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:24:14.150 "subsystems": [ 00:24:14.150 { 00:24:14.150 "subsystem": "keyring", 00:24:14.150 "config": [] 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "subsystem": "iobuf", 00:24:14.150 "config": [ 00:24:14.150 { 00:24:14.150 "method": "iobuf_set_options", 00:24:14.150 "params": { 00:24:14.150 "small_pool_count": 8192, 00:24:14.150 "large_pool_count": 1024, 00:24:14.150 "small_bufsize": 8192, 00:24:14.150 "large_bufsize": 135168 00:24:14.150 } 00:24:14.150 } 00:24:14.150 ] 00:24:14.150 }, 00:24:14.150 { 
00:24:14.150 "subsystem": "sock", 00:24:14.150 "config": [ 00:24:14.150 { 00:24:14.150 "method": "sock_set_default_impl", 00:24:14.150 "params": { 00:24:14.150 "impl_name": "posix" 00:24:14.150 } 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "method": "sock_impl_set_options", 00:24:14.150 "params": { 00:24:14.150 "impl_name": "ssl", 00:24:14.150 "recv_buf_size": 4096, 00:24:14.150 "send_buf_size": 4096, 00:24:14.150 "enable_recv_pipe": true, 00:24:14.150 "enable_quickack": false, 00:24:14.150 "enable_placement_id": 0, 00:24:14.150 "enable_zerocopy_send_server": true, 00:24:14.150 "enable_zerocopy_send_client": false, 00:24:14.150 "zerocopy_threshold": 0, 00:24:14.150 "tls_version": 0, 00:24:14.150 "enable_ktls": false 00:24:14.150 } 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "method": "sock_impl_set_options", 00:24:14.150 "params": { 00:24:14.150 "impl_name": "posix", 00:24:14.150 "recv_buf_size": 2097152, 00:24:14.150 "send_buf_size": 2097152, 00:24:14.150 "enable_recv_pipe": true, 00:24:14.150 "enable_quickack": false, 00:24:14.150 "enable_placement_id": 0, 00:24:14.150 "enable_zerocopy_send_server": true, 00:24:14.150 "enable_zerocopy_send_client": false, 00:24:14.150 "zerocopy_threshold": 0, 00:24:14.150 "tls_version": 0, 00:24:14.150 "enable_ktls": false 00:24:14.150 } 00:24:14.150 } 00:24:14.150 ] 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "subsystem": "vmd", 00:24:14.150 "config": [] 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "subsystem": "accel", 00:24:14.150 "config": [ 00:24:14.150 { 00:24:14.150 "method": "accel_set_options", 00:24:14.150 "params": { 00:24:14.150 "small_cache_size": 128, 00:24:14.150 "large_cache_size": 16, 00:24:14.150 "task_count": 2048, 00:24:14.150 "sequence_count": 2048, 00:24:14.150 "buf_count": 2048 00:24:14.150 } 00:24:14.150 } 00:24:14.150 ] 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "subsystem": "bdev", 00:24:14.150 "config": [ 00:24:14.150 { 00:24:14.150 "method": "bdev_set_options", 00:24:14.150 "params": { 00:24:14.150 "bdev_io_pool_size": 65535, 00:24:14.150 "bdev_io_cache_size": 256, 00:24:14.150 "bdev_auto_examine": true, 00:24:14.150 "iobuf_small_cache_size": 128, 00:24:14.150 "iobuf_large_cache_size": 16 00:24:14.150 } 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "method": "bdev_raid_set_options", 00:24:14.150 "params": { 00:24:14.150 "process_window_size_kb": 1024, 00:24:14.150 "process_max_bandwidth_mb_sec": 0 00:24:14.150 } 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "method": "bdev_iscsi_set_options", 00:24:14.150 "params": { 00:24:14.150 "timeout_sec": 30 00:24:14.150 } 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "method": "bdev_nvme_set_options", 00:24:14.150 "params": { 00:24:14.150 "action_on_timeout": "none", 00:24:14.150 "timeout_us": 0, 00:24:14.150 "timeout_admin_us": 0, 00:24:14.150 "keep_alive_timeout_ms": 10000, 00:24:14.150 "arbitration_burst": 0, 00:24:14.150 "low_priority_weight": 0, 00:24:14.150 "medium_priority_weight": 0, 00:24:14.150 "high_priority_weight": 0, 00:24:14.150 "nvme_adminq_poll_period_us": 10000, 00:24:14.150 "nvme_ioq_poll_period_us": 0, 00:24:14.150 "io_queue_requests": 0, 00:24:14.150 "delay_cmd_submit": true, 00:24:14.150 "transport_retry_count": 4, 00:24:14.150 "bdev_retry_count": 3, 00:24:14.150 "transport_ack_timeout": 0, 00:24:14.150 "ctrlr_loss_timeout_sec": 0, 00:24:14.150 "reconnect_delay_sec": 0, 00:24:14.150 "fast_io_fail_timeout_sec": 0, 00:24:14.150 "disable_auto_failback": false, 00:24:14.150 "generate_uuids": false, 00:24:14.150 "transport_tos": 0, 00:24:14.150 "nvme_error_stat": false, 
00:24:14.150 "rdma_srq_size": 0, 00:24:14.150 "io_path_stat": false, 00:24:14.150 "allow_accel_sequence": false, 00:24:14.150 "rdma_max_cq_size": 0, 00:24:14.150 "rdma_cm_event_timeout_ms": 0, 00:24:14.150 "dhchap_digests": [ 00:24:14.150 "sha256", 00:24:14.150 "sha384", 00:24:14.150 "sha512" 00:24:14.150 ], 00:24:14.150 "dhchap_dhgroups": [ 00:24:14.150 "null", 00:24:14.150 "ffdhe2048", 00:24:14.150 "ffdhe3072", 00:24:14.150 "ffdhe4096", 00:24:14.150 "ffdhe6144", 00:24:14.150 "ffdhe8192" 00:24:14.150 ] 00:24:14.150 } 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "method": "bdev_nvme_set_hotplug", 00:24:14.150 "params": { 00:24:14.150 "period_us": 100000, 00:24:14.150 "enable": false 00:24:14.150 } 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "method": "bdev_malloc_create", 00:24:14.150 "params": { 00:24:14.150 "name": "malloc0", 00:24:14.150 "num_blocks": 8192, 00:24:14.150 "block_size": 4096, 00:24:14.150 "physical_block_size": 4096, 00:24:14.150 "uuid": "34d33771-2407-4a53-bfa8-4b596468f7aa", 00:24:14.150 "optimal_io_boundary": 0, 00:24:14.150 "md_size": 0, 00:24:14.150 "dif_type": 0, 00:24:14.150 "dif_is_head_of_md": false, 00:24:14.150 "dif_pi_format": 0 00:24:14.150 } 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "method": "bdev_wait_for_examine" 00:24:14.150 } 00:24:14.150 ] 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "subsystem": "scsi", 00:24:14.150 "config": null 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "subsystem": "scheduler", 00:24:14.150 "config": [ 00:24:14.150 { 00:24:14.150 "method": "framework_set_scheduler", 00:24:14.150 "params": { 00:24:14.150 "name": "static" 00:24:14.150 } 00:24:14.150 } 00:24:14.150 ] 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "subsystem": "vhost_scsi", 00:24:14.150 "config": [] 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "subsystem": "vhost_blk", 00:24:14.150 "config": [] 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "subsystem": "ublk", 00:24:14.150 "config": [ 00:24:14.150 { 00:24:14.150 "method": "ublk_create_target", 00:24:14.150 "params": { 00:24:14.150 "cpumask": "1" 00:24:14.150 } 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "method": "ublk_start_disk", 00:24:14.150 "params": { 00:24:14.150 "bdev_name": "malloc0", 00:24:14.150 "ublk_id": 0, 00:24:14.150 "num_queues": 1, 00:24:14.150 "queue_depth": 128 00:24:14.150 } 00:24:14.150 } 00:24:14.150 ] 00:24:14.150 }, 00:24:14.150 { 00:24:14.150 "subsystem": "nbd", 00:24:14.150 "config": [] 00:24:14.151 }, 00:24:14.151 { 00:24:14.151 "subsystem": "nvmf", 00:24:14.151 "config": [ 00:24:14.151 { 00:24:14.151 "method": "nvmf_set_config", 00:24:14.151 "params": { 00:24:14.151 "discovery_filter": "match_any", 00:24:14.151 "admin_cmd_passthru": { 00:24:14.151 "identify_ctrlr": false 00:24:14.151 } 00:24:14.151 } 00:24:14.151 }, 00:24:14.151 { 00:24:14.151 "method": "nvmf_set_max_subsystems", 00:24:14.151 "params": { 00:24:14.151 "max_subsystems": 1024 00:24:14.151 } 00:24:14.151 }, 00:24:14.151 { 00:24:14.151 "method": "nvmf_set_crdt", 00:24:14.151 "params": { 00:24:14.151 "crdt1": 0, 00:24:14.151 "crdt2": 0, 00:24:14.151 "crdt3": 0 00:24:14.151 } 00:24:14.151 } 00:24:14.151 ] 00:24:14.151 }, 00:24:14.151 { 00:24:14.151 "subsystem": "iscsi", 00:24:14.151 "config": [ 00:24:14.151 { 00:24:14.151 "method": "iscsi_set_options", 00:24:14.151 "params": { 00:24:14.151 "node_base": "iqn.2016-06.io.spdk", 00:24:14.151 "max_sessions": 128, 00:24:14.151 "max_connections_per_session": 2, 00:24:14.151 "max_queue_depth": 64, 00:24:14.151 "default_time2wait": 2, 00:24:14.151 "default_time2retain": 20, 00:24:14.151 
"first_burst_length": 8192, 00:24:14.151 "immediate_data": true, 00:24:14.151 "allow_duplicated_isid": false, 00:24:14.151 "error_recovery_level": 0, 00:24:14.151 "nop_timeout": 60, 00:24:14.151 "nop_in_interval": 30, 00:24:14.151 "disable_chap": false, 00:24:14.151 "require_chap": false, 00:24:14.151 "mutual_chap": false, 00:24:14.151 "chap_group": 0, 00:24:14.151 "max_large_datain_per_connection": 64, 00:24:14.151 "max_r2t_per_connection": 4, 00:24:14.151 "pdu_pool_size": 36864, 00:24:14.151 "immediate_data_pool_size": 16384, 00:24:14.151 "data_out_pool_size": 2048 00:24:14.151 } 00:24:14.151 } 00:24:14.151 ] 00:24:14.151 } 00:24:14.151 ] 00:24:14.151 }' 00:24:14.151 09:38:14 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 77052 00:24:14.151 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 77052 ']' 00:24:14.151 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 77052 00:24:14.151 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:24:14.151 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.151 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77052 00:24:14.408 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:14.408 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:14.408 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77052' 00:24:14.408 killing process with pid 77052 00:24:14.408 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 77052 00:24:14.408 09:38:14 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 77052 00:24:15.801 [2024-07-25 09:38:16.223243] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:15.801 [2024-07-25 09:38:16.269299] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:15.801 [2024-07-25 09:38:16.269450] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:15.801 [2024-07-25 09:38:16.278269] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:15.801 [2024-07-25 09:38:16.278325] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:15.801 [2024-07-25 09:38:16.278333] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:15.801 [2024-07-25 09:38:16.278356] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:24:15.801 [2024-07-25 09:38:16.282365] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:24:17.181 09:38:17 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=77118 00:24:17.181 09:38:17 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 77118 00:24:17.181 09:38:17 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 77118 ']' 00:24:17.181 09:38:17 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.181 09:38:17 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.181 09:38:17 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:17.181 09:38:17 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:24:17.181 09:38:17 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.181 09:38:17 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:24:17.181 "subsystems": [ 00:24:17.181 { 00:24:17.181 "subsystem": "keyring", 00:24:17.181 "config": [] 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "subsystem": "iobuf", 00:24:17.181 "config": [ 00:24:17.181 { 00:24:17.181 "method": "iobuf_set_options", 00:24:17.181 "params": { 00:24:17.181 "small_pool_count": 8192, 00:24:17.181 "large_pool_count": 1024, 00:24:17.181 "small_bufsize": 8192, 00:24:17.181 "large_bufsize": 135168 00:24:17.181 } 00:24:17.181 } 00:24:17.181 ] 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "subsystem": "sock", 00:24:17.181 "config": [ 00:24:17.181 { 00:24:17.181 "method": "sock_set_default_impl", 00:24:17.181 "params": { 00:24:17.181 "impl_name": "posix" 00:24:17.181 } 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "method": "sock_impl_set_options", 00:24:17.181 "params": { 00:24:17.181 "impl_name": "ssl", 00:24:17.181 "recv_buf_size": 4096, 00:24:17.181 "send_buf_size": 4096, 00:24:17.181 "enable_recv_pipe": true, 00:24:17.181 "enable_quickack": false, 00:24:17.181 "enable_placement_id": 0, 00:24:17.181 "enable_zerocopy_send_server": true, 00:24:17.181 "enable_zerocopy_send_client": false, 00:24:17.181 "zerocopy_threshold": 0, 00:24:17.181 "tls_version": 0, 00:24:17.181 "enable_ktls": false 00:24:17.181 } 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "method": "sock_impl_set_options", 00:24:17.181 "params": { 00:24:17.181 "impl_name": "posix", 00:24:17.181 "recv_buf_size": 2097152, 00:24:17.181 "send_buf_size": 2097152, 00:24:17.181 "enable_recv_pipe": true, 00:24:17.181 "enable_quickack": false, 00:24:17.181 "enable_placement_id": 0, 00:24:17.181 "enable_zerocopy_send_server": true, 00:24:17.181 "enable_zerocopy_send_client": false, 00:24:17.181 "zerocopy_threshold": 0, 00:24:17.181 "tls_version": 0, 00:24:17.181 "enable_ktls": false 00:24:17.181 } 00:24:17.181 } 00:24:17.181 ] 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "subsystem": "vmd", 00:24:17.181 "config": [] 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "subsystem": "accel", 00:24:17.181 "config": [ 00:24:17.181 { 00:24:17.181 "method": "accel_set_options", 00:24:17.181 "params": { 00:24:17.181 "small_cache_size": 128, 00:24:17.181 "large_cache_size": 16, 00:24:17.181 "task_count": 2048, 00:24:17.181 "sequence_count": 2048, 00:24:17.181 "buf_count": 2048 00:24:17.181 } 00:24:17.181 } 00:24:17.181 ] 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "subsystem": "bdev", 00:24:17.181 "config": [ 00:24:17.181 { 00:24:17.181 "method": "bdev_set_options", 00:24:17.181 "params": { 00:24:17.181 "bdev_io_pool_size": 65535, 00:24:17.181 "bdev_io_cache_size": 256, 00:24:17.181 "bdev_auto_examine": true, 00:24:17.181 "iobuf_small_cache_size": 128, 00:24:17.181 "iobuf_large_cache_size": 16 00:24:17.181 } 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "method": "bdev_raid_set_options", 00:24:17.181 "params": { 00:24:17.181 "process_window_size_kb": 1024, 00:24:17.181 "process_max_bandwidth_mb_sec": 0 00:24:17.181 } 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "method": "bdev_iscsi_set_options", 00:24:17.181 "params": { 00:24:17.181 "timeout_sec": 30 00:24:17.181 } 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "method": "bdev_nvme_set_options", 00:24:17.181 "params": { 00:24:17.181 "action_on_timeout": "none", 
00:24:17.181 "timeout_us": 0, 00:24:17.181 "timeout_admin_us": 0, 00:24:17.181 "keep_alive_timeout_ms": 10000, 00:24:17.181 "arbitration_burst": 0, 00:24:17.181 "low_priority_weight": 0, 00:24:17.181 "medium_priority_weight": 0, 00:24:17.181 "high_priority_weight": 0, 00:24:17.181 "nvme_adminq_poll_period_us": 10000, 00:24:17.181 "nvme_ioq_poll_period_us": 0, 00:24:17.181 "io_queue_requests": 0, 00:24:17.181 "delay_cmd_submit": true, 00:24:17.181 "transport_retry_count": 4, 00:24:17.181 "bdev_retry_count": 3, 00:24:17.181 "transport_ack_timeout": 0, 00:24:17.181 "ctrlr_loss_timeout_sec": 0, 00:24:17.181 "reconnect_delay_sec": 0, 00:24:17.181 "fast_io_fail_timeout_sec": 0, 00:24:17.181 "disable_auto_failback": false, 00:24:17.181 "generate_uuids": false, 00:24:17.181 "transport_tos": 0, 00:24:17.181 "nvme_error_stat": false, 00:24:17.181 "rdma_srq_size": 0, 00:24:17.181 "io_path_stat": false, 00:24:17.181 "allow_accel_sequence": false, 00:24:17.181 "rdma_max_cq_size": 0, 00:24:17.181 "rdma_cm_event_timeout_ms": 0, 00:24:17.181 "dhchap_digests": [ 00:24:17.181 "sha256", 00:24:17.181 "sha384", 00:24:17.181 "sha512" 00:24:17.181 ], 00:24:17.181 "dhchap_dhgroups": [ 00:24:17.181 "null", 00:24:17.181 "ffdhe2048", 00:24:17.181 "ffdhe3072", 00:24:17.181 "ffdhe4096", 00:24:17.181 "ffdhe6144", 00:24:17.181 "ffdhe8192" 00:24:17.181 ] 00:24:17.181 } 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "method": "bdev_nvme_set_hotplug", 00:24:17.181 "params": { 00:24:17.181 "period_us": 100000, 00:24:17.181 "enable": false 00:24:17.181 } 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "method": "bdev_malloc_create", 00:24:17.181 "params": { 00:24:17.181 "name": "malloc0", 00:24:17.181 "num_blocks": 8192, 00:24:17.181 "block_size": 4096, 00:24:17.181 "physical_block_size": 4096, 00:24:17.181 "uuid": "34d33771-2407-4a53-bfa8-4b596468f7aa", 00:24:17.181 "optimal_io_boundary": 0, 00:24:17.181 "md_size": 0, 00:24:17.181 "dif_type": 0, 00:24:17.181 "dif_is_head_of_md": false, 00:24:17.181 "dif_pi_format": 0 00:24:17.181 } 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "method": "bdev_wait_for_examine" 00:24:17.181 } 00:24:17.181 ] 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "subsystem": "scsi", 00:24:17.181 "config": null 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "subsystem": "scheduler", 00:24:17.181 "config": [ 00:24:17.181 { 00:24:17.181 "method": "framework_set_scheduler", 00:24:17.181 "params": { 00:24:17.181 "name": "static" 00:24:17.181 } 00:24:17.181 } 00:24:17.181 ] 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "subsystem": "vhost_scsi", 00:24:17.181 "config": [] 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "subsystem": "vhost_blk", 00:24:17.181 "config": [] 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "subsystem": "ublk", 00:24:17.181 "config": [ 00:24:17.181 { 00:24:17.181 "method": "ublk_create_target", 00:24:17.181 "params": { 00:24:17.181 "cpumask": "1" 00:24:17.181 } 00:24:17.181 }, 00:24:17.181 { 00:24:17.181 "method": "ublk_start_disk", 00:24:17.181 "params": { 00:24:17.181 "bdev_name": "malloc0", 00:24:17.182 "ublk_id": 0, 00:24:17.182 "num_queues": 1, 00:24:17.182 "queue_depth": 128 00:24:17.182 } 00:24:17.182 } 00:24:17.182 ] 00:24:17.182 }, 00:24:17.182 { 00:24:17.182 "subsystem": "nbd", 00:24:17.182 "config": [] 00:24:17.182 }, 00:24:17.182 { 00:24:17.182 "subsystem": "nvmf", 00:24:17.182 "config": [ 00:24:17.182 { 00:24:17.182 "method": "nvmf_set_config", 00:24:17.182 "params": { 00:24:17.182 "discovery_filter": "match_any", 00:24:17.182 "admin_cmd_passthru": { 00:24:17.182 "identify_ctrlr": false 
00:24:17.182 } 00:24:17.182 } 00:24:17.182 }, 00:24:17.182 { 00:24:17.182 "method": "nvmf_set_max_subsystems", 00:24:17.182 "params": { 00:24:17.182 "max_subsystems": 1024 00:24:17.182 } 00:24:17.182 }, 00:24:17.182 { 00:24:17.182 "method": "nvmf_set_crdt", 00:24:17.182 "params": { 00:24:17.182 "crdt1": 0, 00:24:17.182 "crdt2": 0, 00:24:17.182 "crdt3": 0 00:24:17.182 } 00:24:17.182 } 00:24:17.182 ] 00:24:17.182 }, 00:24:17.182 { 00:24:17.182 "subsystem": "iscsi", 00:24:17.182 "config": [ 00:24:17.182 { 00:24:17.182 "method": "iscsi_set_options", 00:24:17.182 "params": { 00:24:17.182 "node_base": "iqn.2016-06.io.spdk", 00:24:17.182 "max_sessions": 128, 00:24:17.182 "max_connections_per_session": 2, 00:24:17.182 "max_queue_depth": 64, 00:24:17.182 "default_time2wait": 2, 00:24:17.182 "default_time2retain": 20, 00:24:17.182 "first_burst_length": 8192, 00:24:17.182 "immediate_data": true, 00:24:17.182 "allow_duplicated_isid": false, 00:24:17.182 "error_recovery_level": 0, 00:24:17.182 "nop_timeout": 60, 00:24:17.182 "nop_in_interval": 30, 00:24:17.182 "disable_chap": false, 00:24:17.182 "require_chap": false, 00:24:17.182 "mutual_chap": false, 00:24:17.182 "chap_group": 0, 00:24:17.182 "max_large_datain_per_connection": 64, 00:24:17.182 "max_r2t_per_connection": 4, 00:24:17.182 "pdu_pool_size": 36864, 00:24:17.182 "immediate_data_pool_size": 16384, 00:24:17.182 "data_out_pool_size": 2048 00:24:17.182 } 00:24:17.182 } 00:24:17.182 ] 00:24:17.182 } 00:24:17.182 ] 00:24:17.182 }' 00:24:17.182 09:38:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:17.442 [2024-07-25 09:38:17.795869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:17.442 [2024-07-25 09:38:17.795990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77118 ] 00:24:17.442 [2024-07-25 09:38:17.956655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.701 [2024-07-25 09:38:18.195948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.641 [2024-07-25 09:38:19.213259] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:18.641 [2024-07-25 09:38:19.214470] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:18.641 [2024-07-25 09:38:19.221370] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:24:18.641 [2024-07-25 09:38:19.221438] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:24:18.641 [2024-07-25 09:38:19.221447] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:18.641 [2024-07-25 09:38:19.221453] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:18.641 [2024-07-25 09:38:19.230316] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:18.641 [2024-07-25 09:38:19.230338] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:18.641 [2024-07-25 09:38:19.237272] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:18.641 [2024-07-25 09:38:19.237352] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:18.901 [2024-07-25 09:38:19.254279] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV completed 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 77118 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 77118 ']' 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 77118 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77118 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:18.901 killing process with pid 77118 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77118' 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 77118 00:24:18.901 09:38:19 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 77118 00:24:20.810 [2024-07-25 09:38:20.923264] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:20.810 [2024-07-25 09:38:20.960273] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:20.810 [2024-07-25 09:38:20.960425] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:20.810 [2024-07-25 09:38:20.970268] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:20.810 [2024-07-25 09:38:20.970348] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:20.810 [2024-07-25 09:38:20.970357] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:20.810 [2024-07-25 09:38:20.970382] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:24:20.810 [2024-07-25 09:38:20.970554] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:24:22.191 09:38:22 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:24:22.191 00:24:22.191 real 0m9.457s 00:24:22.191 user 0m8.012s 00:24:22.191 sys 0m2.051s 00:24:22.191 09:38:22 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:22.191 09:38:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:24:22.191 ************************************ 00:24:22.191 END TEST test_save_ublk_config 00:24:22.191 ************************************ 00:24:22.191 09:38:22 
ublk -- ublk/ublk.sh@139 -- # spdk_pid=77196 00:24:22.191 09:38:22 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:22.191 09:38:22 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:22.191 09:38:22 ublk -- ublk/ublk.sh@141 -- # waitforlisten 77196 00:24:22.191 09:38:22 ublk -- common/autotest_common.sh@831 -- # '[' -z 77196 ']' 00:24:22.191 09:38:22 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.191 09:38:22 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:22.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.192 09:38:22 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.192 09:38:22 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:22.192 09:38:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:22.192 [2024-07-25 09:38:22.531437] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:22.192 [2024-07-25 09:38:22.531578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77196 ] 00:24:22.192 [2024-07-25 09:38:22.701378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:22.451 [2024-07-25 09:38:22.918718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.451 [2024-07-25 09:38:22.918754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.390 09:38:23 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.390 09:38:23 ublk -- common/autotest_common.sh@864 -- # return 0 00:24:23.390 09:38:23 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:24:23.390 09:38:23 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:23.390 09:38:23 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:23.390 09:38:23 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:23.390 ************************************ 00:24:23.390 START TEST test_create_ublk 00:24:23.390 ************************************ 00:24:23.390 09:38:23 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:24:23.390 09:38:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:24:23.390 09:38:23 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.390 09:38:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:23.390 [2024-07-25 09:38:23.843254] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:23.390 [2024-07-25 09:38:23.846363] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:23.390 09:38:23 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.390 09:38:23 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:24:23.390 09:38:23 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:24:23.390 09:38:23 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.390 09:38:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:23.649 09:38:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.649 
09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:24:23.649 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:23.649 09:38:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.649 09:38:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:23.649 [2024-07-25 09:38:24.172399] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:23.649 [2024-07-25 09:38:24.172758] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:23.649 [2024-07-25 09:38:24.172776] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:23.649 [2024-07-25 09:38:24.172785] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:23.649 [2024-07-25 09:38:24.180588] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:23.649 [2024-07-25 09:38:24.180615] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:23.649 [2024-07-25 09:38:24.188279] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:23.649 [2024-07-25 09:38:24.196443] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:23.649 [2024-07-25 09:38:24.217263] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:23.649 09:38:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.649 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:24:23.649 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:24:23.649 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:24:23.649 09:38:24 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.649 09:38:24 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:23.649 09:38:24 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.649 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:24:23.649 { 00:24:23.649 "ublk_device": "/dev/ublkb0", 00:24:23.649 "id": 0, 00:24:23.649 "queue_depth": 512, 00:24:23.649 "num_queues": 4, 00:24:23.649 "bdev_name": "Malloc0" 00:24:23.649 } 00:24:23.649 ]' 00:24:23.649 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:24:23.909 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:23.909 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:24:23.909 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:24:23.909 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:24:23.909 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:24:23.909 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:24:23.909 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:24:23.909 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:24:23.909 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:23.909 09:38:24 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@40 -- # local 
file=/dev/ublkb0 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:24:23.909 09:38:24 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:24:24.169 fio: verification read phase will never start because write phase uses all of runtime 00:24:24.169 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:24:24.169 fio-3.35 00:24:24.169 Starting 1 process 00:24:34.153 00:24:34.153 fio_test: (groupid=0, jobs=1): err= 0: pid=77250: Thu Jul 25 09:38:34 2024 00:24:34.153 write: IOPS=16.3k, BW=63.5MiB/s (66.6MB/s)(636MiB/10001msec); 0 zone resets 00:24:34.153 clat (usec): min=41, max=4004, avg=60.56, stdev=96.86 00:24:34.153 lat (usec): min=41, max=4004, avg=61.05, stdev=96.88 00:24:34.153 clat percentiles (usec): 00:24:34.153 | 1.00th=[ 45], 5.00th=[ 49], 10.00th=[ 50], 20.00th=[ 51], 00:24:34.153 | 30.00th=[ 52], 40.00th=[ 53], 50.00th=[ 55], 60.00th=[ 57], 00:24:34.153 | 70.00th=[ 62], 80.00th=[ 65], 90.00th=[ 68], 95.00th=[ 70], 00:24:34.153 | 99.00th=[ 79], 99.50th=[ 85], 99.90th=[ 2024], 99.95th=[ 2802], 00:24:34.153 | 99.99th=[ 3556] 00:24:34.153 bw ( KiB/s): min=56720, max=71248, per=99.62%, avg=64828.21, stdev=6579.10, samples=19 00:24:34.153 iops : min=14180, max=17812, avg=16207.05, stdev=1644.78, samples=19 00:24:34.153 lat (usec) : 50=12.39%, 100=87.38%, 250=0.05%, 500=0.01%, 750=0.01% 00:24:34.153 lat (usec) : 1000=0.01% 00:24:34.153 lat (msec) : 2=0.06%, 4=0.10%, 10=0.01% 00:24:34.153 cpu : usr=2.48%, sys=10.04%, ctx=162703, majf=0, minf=794 00:24:34.153 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:34.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.153 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.153 issued rwts: total=0,162702,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.153 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:34.153 00:24:34.153 Run status group 0 (all jobs): 00:24:34.153 WRITE: bw=63.5MiB/s (66.6MB/s), 63.5MiB/s-63.5MiB/s (66.6MB/s-66.6MB/s), io=636MiB (666MB), run=10001-10001msec 00:24:34.153 00:24:34.153 Disk stats (read/write): 00:24:34.153 ublkb0: ios=0/160887, merge=0/0, ticks=0/8667, in_queue=8667, util=99.04% 00:24:34.153 09:38:34 ublk.test_create_ublk -- 
ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:24:34.153 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.153 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.153 [2024-07-25 09:38:34.730280] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:34.413 [2024-07-25 09:38:34.770714] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:34.413 [2024-07-25 09:38:34.775374] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:34.413 [2024-07-25 09:38:34.783342] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:34.413 [2024-07-25 09:38:34.783667] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:34.413 [2024-07-25 09:38:34.783685] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.413 09:38:34 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.413 [2024-07-25 09:38:34.800366] ublk.c:1053:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:24:34.413 request: 00:24:34.413 { 00:24:34.413 "ublk_id": 0, 00:24:34.413 "method": "ublk_stop_disk", 00:24:34.413 "req_id": 1 00:24:34.413 } 00:24:34.413 Got JSON-RPC error response 00:24:34.413 response: 00:24:34.413 { 00:24:34.413 "code": -19, 00:24:34.413 "message": "No such device" 00:24:34.413 } 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:34.413 09:38:34 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.413 [2024-07-25 09:38:34.824323] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:24:34.413 [2024-07-25 09:38:34.832259] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:24:34.413 [2024-07-25 09:38:34.832295] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:34.413 09:38:34 
ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.413 09:38:34 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.413 09:38:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.673 09:38:35 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.673 09:38:35 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:24:34.673 09:38:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:24:34.673 09:38:35 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.673 09:38:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.673 09:38:35 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.673 09:38:35 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:34.673 09:38:35 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:24:34.673 09:38:35 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:24:34.673 09:38:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:34.673 09:38:35 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.673 09:38:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.673 09:38:35 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.673 09:38:35 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:34.673 09:38:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:24:34.933 ************************************ 00:24:34.933 END TEST test_create_ublk 00:24:34.933 ************************************ 00:24:34.933 09:38:35 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:34.933 00:24:34.933 real 0m11.483s 00:24:34.933 user 0m0.633s 00:24:34.933 sys 0m1.129s 00:24:34.933 09:38:35 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:34.933 09:38:35 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.933 09:38:35 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:24:34.933 09:38:35 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:34.933 09:38:35 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:34.933 09:38:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.933 ************************************ 00:24:34.933 START TEST test_create_multi_ublk 00:24:34.933 ************************************ 00:24:34.933 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:24:34.933 09:38:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:24:34.933 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.933 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:34.933 [2024-07-25 09:38:35.379271] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:34.933 [2024-07-25 09:38:35.382216] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:34.933 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.933 09:38:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:24:34.933 
09:38:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:24:34.933 09:38:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:34.933 09:38:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:24:34.933 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.933 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:35.193 [2024-07-25 09:38:35.700411] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:24:35.193 [2024-07-25 09:38:35.700832] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:24:35.193 [2024-07-25 09:38:35.700851] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:24:35.193 [2024-07-25 09:38:35.700859] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:24:35.193 [2024-07-25 09:38:35.709514] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:35.193 [2024-07-25 09:38:35.709536] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:35.193 [2024-07-25 09:38:35.716268] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:35.193 [2024-07-25 09:38:35.716835] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:24:35.193 [2024-07-25 09:38:35.738285] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.193 09:38:35 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:35.453 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.453 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:24:35.453 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:24:35.453 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.453 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:35.453 [2024-07-25 09:38:36.065416] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:24:35.453 [2024-07-25 09:38:36.065788] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:24:35.453 [2024-07-25 09:38:36.065804] ublk.c: 
937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:35.453 [2024-07-25 09:38:36.065813] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:35.713 [2024-07-25 09:38:36.073300] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:35.713 [2024-07-25 09:38:36.073327] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:35.713 [2024-07-25 09:38:36.081270] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:35.713 [2024-07-25 09:38:36.081879] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:35.713 [2024-07-25 09:38:36.090298] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:35.713 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.713 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:24:35.713 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:35.713 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:24:35.713 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.713 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:35.973 [2024-07-25 09:38:36.423414] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:24:35.973 [2024-07-25 09:38:36.423784] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:24:35.973 [2024-07-25 09:38:36.423805] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:24:35.973 [2024-07-25 09:38:36.423812] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:24:35.973 [2024-07-25 09:38:36.431281] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:35.973 [2024-07-25 09:38:36.431300] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:35.973 [2024-07-25 09:38:36.439281] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:35.973 [2024-07-25 09:38:36.439841] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:24:35.973 [2024-07-25 09:38:36.442580] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:35.973 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:36.233 [2024-07-25 09:38:36.782387] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:24:36.233 [2024-07-25 09:38:36.782764] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:24:36.233 [2024-07-25 09:38:36.782780] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:24:36.233 [2024-07-25 09:38:36.782790] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:24:36.233 [2024-07-25 09:38:36.786758] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:36.233 [2024-07-25 09:38:36.786784] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:36.233 [2024-07-25 09:38:36.797276] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:36.233 [2024-07-25 09:38:36.797846] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:24:36.233 [2024-07-25 09:38:36.802667] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:36.233 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:24:36.233 { 00:24:36.233 "ublk_device": "/dev/ublkb0", 00:24:36.233 "id": 0, 00:24:36.234 "queue_depth": 512, 00:24:36.234 "num_queues": 4, 00:24:36.234 "bdev_name": "Malloc0" 00:24:36.234 }, 00:24:36.234 { 00:24:36.234 "ublk_device": "/dev/ublkb1", 00:24:36.234 "id": 1, 00:24:36.234 "queue_depth": 512, 00:24:36.234 "num_queues": 4, 00:24:36.234 "bdev_name": "Malloc1" 00:24:36.234 }, 00:24:36.234 { 00:24:36.234 "ublk_device": "/dev/ublkb2", 00:24:36.234 "id": 2, 00:24:36.234 "queue_depth": 512, 00:24:36.234 "num_queues": 4, 00:24:36.234 "bdev_name": "Malloc2" 00:24:36.234 }, 00:24:36.234 { 00:24:36.234 "ublk_device": "/dev/ublkb3", 00:24:36.234 "id": 3, 00:24:36.234 "queue_depth": 512, 00:24:36.234 "num_queues": 4, 00:24:36.234 "bdev_name": "Malloc3" 00:24:36.234 } 00:24:36.234 ]' 00:24:36.234 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:24:36.234 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:36.234 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:24:36.493 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- 
# [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:24:36.493 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:24:36.493 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:24:36.493 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:24:36.493 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:36.493 09:38:36 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:24:36.493 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:36.493 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:24:36.493 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:24:36.493 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:36.493 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:24:36.493 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:24:36.493 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:24:36.752 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 
00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:24:37.011 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:37.270 [2024-07-25 09:38:37.717403] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:24:37.270 [2024-07-25 09:38:37.756315] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:37.270 [2024-07-25 09:38:37.757474] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:24:37.270 [2024-07-25 09:38:37.764280] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:37.270 [2024-07-25 09:38:37.764603] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:24:37.270 [2024-07-25 09:38:37.764619] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:37.270 [2024-07-25 09:38:37.778374] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:24:37.270 [2024-07-25 09:38:37.812741] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:37.270 [2024-07-25 09:38:37.817554] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:24:37.270 [2024-07-25 09:38:37.825347] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:37.270 [2024-07-25 09:38:37.825643] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:24:37.270 [2024-07-25 09:38:37.825657] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.270 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:37.270 [2024-07-25 09:38:37.837356] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: 
ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:24:37.270 [2024-07-25 09:38:37.872316] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:37.270 [2024-07-25 09:38:37.873386] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:24:37.270 [2024-07-25 09:38:37.876443] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:37.270 [2024-07-25 09:38:37.876727] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:24:37.270 [2024-07-25 09:38:37.876742] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:24:37.530 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.530 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:37.530 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:24:37.530 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.530 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:37.530 [2024-07-25 09:38:37.895366] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:24:37.530 [2024-07-25 09:38:37.926301] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:24:37.530 [2024-07-25 09:38:37.927255] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:24:37.530 [2024-07-25 09:38:37.934258] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:24:37.530 [2024-07-25 09:38:37.934527] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:24:37.530 [2024-07-25 09:38:37.934541] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:24:37.530 09:38:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.530 09:38:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:24:37.530 [2024-07-25 09:38:38.116375] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:24:37.530 [2024-07-25 09:38:38.123274] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:24:37.530 [2024-07-25 09:38:38.123323] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:24:37.530 09:38:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:24:37.790 09:38:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:37.790 09:38:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:37.790 09:38:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.790 09:38:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:38.049 09:38:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.049 09:38:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:38.049 09:38:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:38.049 09:38:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.049 09:38:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:38.308 09:38:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.308 09:38:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for 
i in $(seq 0 $MAX_DEV_ID) 00:24:38.308 09:38:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:24:38.308 09:38:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.308 09:38:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:38.876 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.876 09:38:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:24:38.876 09:38:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:24:38.876 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.876 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:39.135 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.135 09:38:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:24:39.135 09:38:39 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:24:39.135 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.135 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:39.136 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.136 09:38:39 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:24:39.136 09:38:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:24:39.136 09:38:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:24:39.136 09:38:39 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:24:39.136 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.136 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:39.136 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.136 09:38:39 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:24:39.136 09:38:39 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:24:39.395 ************************************ 00:24:39.395 END TEST test_create_multi_ublk 00:24:39.395 ************************************ 00:24:39.395 09:38:39 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:24:39.395 00:24:39.395 real 0m4.385s 00:24:39.395 user 0m1.046s 00:24:39.395 sys 0m0.192s 00:24:39.395 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:39.395 09:38:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:24:39.395 09:38:39 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:39.395 09:38:39 ublk -- ublk/ublk.sh@147 -- # cleanup 00:24:39.395 09:38:39 ublk -- ublk/ublk.sh@130 -- # killprocess 77196 00:24:39.395 09:38:39 ublk -- common/autotest_common.sh@950 -- # '[' -z 77196 ']' 00:24:39.395 09:38:39 ublk -- common/autotest_common.sh@954 -- # kill -0 77196 00:24:39.395 09:38:39 ublk -- common/autotest_common.sh@955 -- # uname 00:24:39.395 09:38:39 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.395 09:38:39 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77196 00:24:39.395 killing process with pid 77196 00:24:39.395 09:38:39 ublk -- common/autotest_common.sh@956 -- 
# process_name=reactor_0 00:24:39.395 09:38:39 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:39.395 09:38:39 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77196' 00:24:39.395 09:38:39 ublk -- common/autotest_common.sh@969 -- # kill 77196 00:24:39.395 09:38:39 ublk -- common/autotest_common.sh@974 -- # wait 77196 00:24:40.773 [2024-07-25 09:38:40.978602] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:24:40.773 [2024-07-25 09:38:40.978661] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:24:41.737 00:24:41.737 real 0m29.481s 00:24:41.737 user 0m43.993s 00:24:41.737 sys 0m8.199s 00:24:41.737 09:38:42 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:41.737 09:38:42 ublk -- common/autotest_common.sh@10 -- # set +x 00:24:41.737 ************************************ 00:24:41.737 END TEST ublk 00:24:41.737 ************************************ 00:24:41.737 09:38:42 -- spdk/autotest.sh@256 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:41.737 09:38:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:41.737 09:38:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:41.737 09:38:42 -- common/autotest_common.sh@10 -- # set +x 00:24:41.737 ************************************ 00:24:41.737 START TEST ublk_recovery 00:24:41.737 ************************************ 00:24:41.737 09:38:42 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:24:42.006 * Looking for test storage... 00:24:42.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:24:42.006 09:38:42 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:24:42.006 09:38:42 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:24:42.006 09:38:42 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:24:42.006 09:38:42 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:24:42.006 09:38:42 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:24:42.006 09:38:42 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:24:42.006 09:38:42 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:24:42.006 09:38:42 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:24:42.006 09:38:42 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:24:42.006 09:38:42 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:24:42.006 09:38:42 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=77600 00:24:42.006 09:38:42 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:42.006 09:38:42 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:42.006 09:38:42 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 77600 00:24:42.006 09:38:42 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 77600 ']' 00:24:42.006 09:38:42 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.006 09:38:42 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.006 09:38:42 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
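Note: the test_create_multi_ublk run above reduces to the RPC sequence sketched below. This is a reconstruction from the rpc_cmd traces in the log, not the ublk.sh script itself; the malloc bdev size (128 MiB) is an assumption, while the queue count (4), queue depth (512) and device ids 0-3 are taken from the trace.

    # sketch: create the target, expose four malloc bdevs as /dev/ublkb0..3, verify, tear down
    scripts/rpc.py ublk_create_target
    for i in 0 1 2 3; do
        scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096          # size assumed, not shown in this log
        scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512           # exposes /dev/ublkb$i
    done
    scripts/rpc.py ublk_get_disks | jq -r '.[].ublk_device'              # expect ublkb0..ublkb3
    for i in 0 1 2 3; do scripts/rpc.py ublk_stop_disk $i; done
    scripts/rpc.py ublk_destroy_target
    for i in 0 1 2 3; do scripts/rpc.py bdev_malloc_delete Malloc$i; done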
00:24:42.006 09:38:42 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.006 09:38:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.006 [2024-07-25 09:38:42.554361] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:42.006 [2024-07-25 09:38:42.554477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77600 ] 00:24:42.266 [2024-07-25 09:38:42.718677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:42.525 [2024-07-25 09:38:42.941326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.525 [2024-07-25 09:38:42.941362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.463 09:38:43 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:43.463 09:38:43 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:24:43.463 09:38:43 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:24:43.463 09:38:43 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.463 09:38:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.463 [2024-07-25 09:38:43.852246] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:43.463 [2024-07-25 09:38:43.855182] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:43.463 09:38:43 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.463 09:38:43 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:43.463 09:38:43 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.463 09:38:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.463 malloc0 00:24:43.463 09:38:44 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:43.463 09:38:44 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:24:43.463 09:38:44 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:43.463 09:38:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.463 [2024-07-25 09:38:44.024411] ublk.c:1890:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:24:43.463 [2024-07-25 09:38:44.024515] ublk.c:1931:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:24:43.463 [2024-07-25 09:38:44.024525] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:43.463 [2024-07-25 09:38:44.024533] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:24:43.463 [2024-07-25 09:38:44.033337] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:24:43.463 [2024-07-25 09:38:44.033366] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:24:43.463 [2024-07-25 09:38:44.039278] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:24:43.463 [2024-07-25 09:38:44.039422] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:24:43.463 [2024-07-25 09:38:44.048260] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:24:43.463 1 00:24:43.463 09:38:44 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:43.463 09:38:44 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:24:44.843 09:38:45 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=77635 00:24:44.843 09:38:45 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:24:44.843 09:38:45 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:24:44.843 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:44.843 fio-3.35 00:24:44.843 Starting 1 process 00:24:50.116 09:38:50 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 77600 00:24:50.116 09:38:50 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:24:55.394 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 77600 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:24:55.394 09:38:55 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=77745 00:24:55.394 09:38:55 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:24:55.394 09:38:55 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:55.394 09:38:55 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 77745 00:24:55.394 09:38:55 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 77745 ']' 00:24:55.394 09:38:55 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.394 09:38:55 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:55.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.394 09:38:55 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.394 09:38:55 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:55.394 09:38:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.394 [2024-07-25 09:38:55.175381] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:55.394 [2024-07-25 09:38:55.175502] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77745 ] 00:24:55.394 [2024-07-25 09:38:55.333119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:55.394 [2024-07-25 09:38:55.570095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.394 [2024-07-25 09:38:55.570128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.964 09:38:56 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.964 09:38:56 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:24:55.964 09:38:56 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:24:55.964 09:38:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.964 09:38:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:55.964 [2024-07-25 09:38:56.494247] ublk.c: 538:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:24:55.964 [2024-07-25 09:38:56.497281] ublk.c: 724:ublk_create_target: *NOTICE*: UBLK target created successfully 00:24:55.964 09:38:56 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.964 09:38:56 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:24:55.964 09:38:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.964 09:38:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 malloc0 00:24:56.224 09:38:56 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 09:38:56 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:24:56.224 09:38:56 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 09:38:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 [2024-07-25 09:38:56.668421] ublk.c:2077:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:24:56.224 [2024-07-25 09:38:56.668471] ublk.c: 937:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:24:56.224 [2024-07-25 09:38:56.668481] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:24:56.224 [2024-07-25 09:38:56.676299] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:24:56.224 [2024-07-25 09:38:56.676322] ublk.c:2006:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:24:56.224 [2024-07-25 09:38:56.676404] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:24:56.224 1 00:24:56.224 09:38:56 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 09:38:56 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 77635 00:25:22.800 [2024-07-25 09:39:20.378262] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:25:22.800 [2024-07-25 09:39:20.388952] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:25:22.800 [2024-07-25 09:39:20.396574] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:25:22.800 [2024-07-25 09:39:20.396606] ublk.c: 379:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:25:44.774 00:25:44.774 
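For reference, the recovery path exercised above amounts to roughly the following sequence, reconstructed from the traces in the log; the real ublk_recovery.sh drives it through rpc_cmd/waitforlisten helpers and taskset, which are omitted here.

    # first target: expose malloc0 as /dev/ublkb1 and run fio against it
    build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 & fio_pid=$!
    # kill the target hard while fio is still running, then bring up a fresh one
    kill -9 $spdk_pid
    build/bin/spdk_tgt -m 0x3 -L ublk & spdk_pid=$!
    # on the new target, recover the still-present kernel ublk device 1
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1
    wait $fio_pid        # the 60 s fio job should complete without I/O errors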
fio_test: (groupid=0, jobs=1): err= 0: pid=77643: Thu Jul 25 09:39:45 2024 00:25:44.774 read: IOPS=10.3k, BW=40.1MiB/s (42.0MB/s)(2405MiB/60002msec) 00:25:44.774 slat (nsec): min=1038, max=257830, avg=9234.74, stdev=2987.01 00:25:44.774 clat (usec): min=1446, max=30334k, avg=6080.46, stdev=304377.78 00:25:44.774 lat (usec): min=1457, max=30334k, avg=6089.69, stdev=304377.75 00:25:44.774 clat percentiles (msec): 00:25:44.774 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:25:44.774 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3], 00:25:44.774 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:25:44.774 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 11], 00:25:44.774 | 99.99th=[17113] 00:25:44.774 bw ( KiB/s): min=33176, max=109752, per=100.00%, avg=82266.47, stdev=11714.35, samples=59 00:25:44.774 iops : min= 8294, max=27438, avg=20566.58, stdev=2928.64, samples=59 00:25:44.774 write: IOPS=10.2k, BW=40.0MiB/s (42.0MB/s)(2402MiB/60002msec); 0 zone resets 00:25:44.774 slat (nsec): min=1106, max=271470, avg=9385.74, stdev=3125.65 00:25:44.774 clat (usec): min=1363, max=30335k, avg=6381.78, stdev=314206.05 00:25:44.774 lat (usec): min=1370, max=30335k, avg=6391.17, stdev=314206.02 00:25:44.774 clat percentiles (msec): 00:25:44.774 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:25:44.774 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 4], 60.00th=[ 4], 00:25:44.774 | 70.00th=[ 4], 80.00th=[ 4], 90.00th=[ 4], 95.00th=[ 5], 00:25:44.774 | 99.00th=[ 7], 99.50th=[ 8], 99.90th=[ 9], 99.95th=[ 11], 00:25:44.774 | 99.99th=[17113] 00:25:44.774 bw ( KiB/s): min=33816, max=109256, per=100.00%, avg=82165.92, stdev=11524.67, samples=59 00:25:44.774 iops : min= 8454, max=27314, avg=20541.41, stdev=2881.21, samples=59 00:25:44.774 lat (msec) : 2=0.37%, 4=94.06%, 10=5.51%, 20=0.05%, >=2000=0.01% 00:25:44.774 cpu : usr=5.40%, sys=19.13%, ctx=55979, majf=0, minf=13 00:25:44.774 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:25:44.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:44.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:44.774 issued rwts: total=615576,614939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:44.774 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:44.774 00:25:44.774 Run status group 0 (all jobs): 00:25:44.774 READ: bw=40.1MiB/s (42.0MB/s), 40.1MiB/s-40.1MiB/s (42.0MB/s-42.0MB/s), io=2405MiB (2521MB), run=60002-60002msec 00:25:44.774 WRITE: bw=40.0MiB/s (42.0MB/s), 40.0MiB/s-40.0MiB/s (42.0MB/s-42.0MB/s), io=2402MiB (2519MB), run=60002-60002msec 00:25:44.774 00:25:44.774 Disk stats (read/write): 00:25:44.774 ublkb1: ios=613255/612698, merge=0/0, ticks=3675069/3783309, in_queue=7458378, util=99.97% 00:25:44.774 09:39:45 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:25:44.774 09:39:45 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.774 09:39:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:44.774 [2024-07-25 09:39:45.335441] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:25:44.774 [2024-07-25 09:39:45.374319] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:25:44.774 [2024-07-25 09:39:45.378300] ublk.c: 435:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:25:45.034 [2024-07-25 09:39:45.389287] ublk.c: 329:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 
00:25:45.034 [2024-07-25 09:39:45.389477] ublk.c: 951:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:25:45.034 [2024-07-25 09:39:45.389497] ublk.c:1785:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.034 09:39:45 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.034 [2024-07-25 09:39:45.395367] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:25:45.034 [2024-07-25 09:39:45.406110] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:25:45.034 [2024-07-25 09:39:45.406158] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.034 09:39:45 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:25:45.034 09:39:45 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:25:45.034 09:39:45 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 77745 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 77745 ']' 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 77745 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77745 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:45.034 killing process with pid 77745 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77745' 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@969 -- # kill 77745 00:25:45.034 09:39:45 ublk_recovery -- common/autotest_common.sh@974 -- # wait 77745 00:25:45.997 [2024-07-25 09:39:46.572189] ublk.c: 801:_ublk_fini: *DEBUG*: finish shutdown 00:25:45.997 [2024-07-25 09:39:46.572273] ublk.c: 732:_ublk_fini_done: *DEBUG*: 00:25:47.902 00:25:47.902 real 1m5.676s 00:25:47.902 user 1m53.515s 00:25:47.902 sys 0m21.483s 00:25:47.902 09:39:48 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:47.902 09:39:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.902 ************************************ 00:25:47.902 END TEST ublk_recovery 00:25:47.902 ************************************ 00:25:47.902 09:39:48 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@264 -- # timing_exit lib 00:25:47.902 09:39:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:47.902 09:39:48 -- common/autotest_common.sh@10 -- # set +x 00:25:47.902 09:39:48 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@283 -- # '[' 0 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@334 -- 
# '[' 0 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@343 -- # '[' 1 -eq 1 ']' 00:25:47.902 09:39:48 -- spdk/autotest.sh@344 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:47.902 09:39:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:47.902 09:39:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:47.902 09:39:48 -- common/autotest_common.sh@10 -- # set +x 00:25:47.902 ************************************ 00:25:47.902 START TEST ftl 00:25:47.902 ************************************ 00:25:47.902 09:39:48 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:47.902 * Looking for test storage... 00:25:47.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:47.902 09:39:48 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:47.902 09:39:48 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:25:47.902 09:39:48 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:47.902 09:39:48 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:47.902 09:39:48 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:47.902 09:39:48 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:47.902 09:39:48 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:47.902 09:39:48 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:47.902 09:39:48 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:47.902 09:39:48 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.902 09:39:48 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.902 09:39:48 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:47.902 09:39:48 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:47.902 09:39:48 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:47.902 09:39:48 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:47.902 09:39:48 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:47.902 09:39:48 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:47.902 09:39:48 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.902 09:39:48 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:47.902 09:39:48 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:47.902 09:39:48 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:47.902 09:39:48 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:47.902 09:39:48 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:47.902 09:39:48 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:47.902 09:39:48 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:47.902 09:39:48 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:47.902 09:39:48 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:47.902 09:39:48 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:47.902 09:39:48 ftl -- 
ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:47.902 09:39:48 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:47.902 09:39:48 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:25:47.902 09:39:48 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:25:47.902 09:39:48 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:25:47.902 09:39:48 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:25:47.902 09:39:48 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:48.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:48.421 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:48.421 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:48.421 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:48.421 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:48.680 09:39:49 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=78543 00:25:48.680 09:39:49 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:25:48.680 09:39:49 ftl -- ftl/ftl.sh@38 -- # waitforlisten 78543 00:25:48.680 09:39:49 ftl -- common/autotest_common.sh@831 -- # '[' -z 78543 ']' 00:25:48.680 09:39:49 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:48.680 09:39:49 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:48.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:48.680 09:39:49 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:48.680 09:39:49 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:48.680 09:39:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:48.680 [2024-07-25 09:39:49.151899] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
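The --wait-for-rpc launch above, together with the bdev_set_options -d and framework_start_init calls traced just below, is the deferred-init pattern: the target is started with framework initialization held back so that bdev options (here, disabling auto-examine) can be set before the subsystems come up. A minimal sketch under that assumption, with paths relative to the SPDK repo:

    build/bin/spdk_tgt --wait-for-rpc &
    scripts/rpc.py bdev_set_options -d                                # disable bdev auto-examine
    scripts/rpc.py framework_start_init                               # now finish subsystem init
    scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)    # attach the local NVMe devices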
00:25:48.680 [2024-07-25 09:39:49.152014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78543 ] 00:25:48.939 [2024-07-25 09:39:49.312750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.939 [2024-07-25 09:39:49.548185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.508 09:39:49 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:49.508 09:39:49 ftl -- common/autotest_common.sh@864 -- # return 0 00:25:49.508 09:39:49 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:25:49.508 09:39:50 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:25:50.886 09:39:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:25:50.886 09:39:51 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@50 -- # break 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:25:51.146 09:39:51 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:25:51.406 09:39:51 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:25:51.406 09:39:51 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:25:51.406 09:39:51 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:25:51.406 09:39:51 ftl -- ftl/ftl.sh@63 -- # break 00:25:51.406 09:39:51 ftl -- ftl/ftl.sh@66 -- # killprocess 78543 00:25:51.406 09:39:51 ftl -- common/autotest_common.sh@950 -- # '[' -z 78543 ']' 00:25:51.406 09:39:51 ftl -- common/autotest_common.sh@954 -- # kill -0 78543 00:25:51.406 09:39:51 ftl -- common/autotest_common.sh@955 -- # uname 00:25:51.406 09:39:51 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:51.406 09:39:51 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78543 00:25:51.406 09:39:51 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:51.406 09:39:51 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:51.406 killing process with pid 78543 00:25:51.406 09:39:51 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78543' 00:25:51.406 09:39:51 ftl -- common/autotest_common.sh@969 -- # kill 78543 00:25:51.406 09:39:51 ftl -- common/autotest_common.sh@974 -- # wait 78543 00:25:53.949 09:39:54 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:25:53.949 09:39:54 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:53.949 09:39:54 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:25:53.949 09:39:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:53.949 09:39:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:53.949 ************************************ 00:25:53.949 START TEST ftl_fio_basic 00:25:53.949 ************************************ 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:25:53.949 * Looking for test storage... 00:25:53.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:25:53.949 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=78678 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 78678 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 78678 ']' 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:53.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:53.950 09:39:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:25:54.209 [2024-07-25 09:39:54.606528] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
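The setup traced below builds the FTL bdev in stages; condensed, the chain looks roughly like the sketch that follows. Names, PCI addresses, sizes and flags are copied from the trace; the <...> placeholders stand for the UUIDs printed further down.

    # base device: local NVMe at 0000:00:11.0, wrapped in a thin-provisioned lvol
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
    # cache device: second NVMe at 0000:00:10.0, split to carve out the nv_cache partition
    scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
    # finally the FTL bdev on top of the lvol, with nvc0n1p0 as non-volatile cache
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 60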
00:25:54.209 [2024-07-25 09:39:54.606661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78678 ] 00:25:54.209 [2024-07-25 09:39:54.770398] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:54.468 [2024-07-25 09:39:54.995766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.468 [2024-07-25 09:39:54.995909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:54.468 [2024-07-25 09:39:54.995946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.406 09:39:55 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:55.406 09:39:55 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:25:55.406 09:39:55 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:55.406 09:39:55 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:25:55.406 09:39:55 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:55.406 09:39:55 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:25:55.406 09:39:55 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:25:55.406 09:39:55 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:55.666 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:55.666 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:25:55.666 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:55.666 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:25:55.666 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:55.666 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:25:55.666 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:25:55.666 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:55.925 { 00:25:55.925 "name": "nvme0n1", 00:25:55.925 "aliases": [ 00:25:55.925 "329950b4-969e-4217-a3f4-f3901d2f10ba" 00:25:55.925 ], 00:25:55.925 "product_name": "NVMe disk", 00:25:55.925 "block_size": 4096, 00:25:55.925 "num_blocks": 1310720, 00:25:55.925 "uuid": "329950b4-969e-4217-a3f4-f3901d2f10ba", 00:25:55.925 "assigned_rate_limits": { 00:25:55.925 "rw_ios_per_sec": 0, 00:25:55.925 "rw_mbytes_per_sec": 0, 00:25:55.925 "r_mbytes_per_sec": 0, 00:25:55.925 "w_mbytes_per_sec": 0 00:25:55.925 }, 00:25:55.925 "claimed": false, 00:25:55.925 "zoned": false, 00:25:55.925 "supported_io_types": { 00:25:55.925 "read": true, 00:25:55.925 "write": true, 00:25:55.925 "unmap": true, 00:25:55.925 "flush": true, 00:25:55.925 "reset": true, 00:25:55.925 "nvme_admin": true, 00:25:55.925 "nvme_io": true, 00:25:55.925 "nvme_io_md": false, 00:25:55.925 "write_zeroes": true, 00:25:55.925 "zcopy": false, 00:25:55.925 "get_zone_info": false, 00:25:55.925 "zone_management": false, 00:25:55.925 "zone_append": false, 00:25:55.925 "compare": true, 00:25:55.925 "compare_and_write": false, 00:25:55.925 "abort": true, 00:25:55.925 "seek_hole": false, 00:25:55.925 
"seek_data": false, 00:25:55.925 "copy": true, 00:25:55.925 "nvme_iov_md": false 00:25:55.925 }, 00:25:55.925 "driver_specific": { 00:25:55.925 "nvme": [ 00:25:55.925 { 00:25:55.925 "pci_address": "0000:00:11.0", 00:25:55.925 "trid": { 00:25:55.925 "trtype": "PCIe", 00:25:55.925 "traddr": "0000:00:11.0" 00:25:55.925 }, 00:25:55.925 "ctrlr_data": { 00:25:55.925 "cntlid": 0, 00:25:55.925 "vendor_id": "0x1b36", 00:25:55.925 "model_number": "QEMU NVMe Ctrl", 00:25:55.925 "serial_number": "12341", 00:25:55.925 "firmware_revision": "8.0.0", 00:25:55.925 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:55.925 "oacs": { 00:25:55.925 "security": 0, 00:25:55.925 "format": 1, 00:25:55.925 "firmware": 0, 00:25:55.925 "ns_manage": 1 00:25:55.925 }, 00:25:55.925 "multi_ctrlr": false, 00:25:55.925 "ana_reporting": false 00:25:55.925 }, 00:25:55.925 "vs": { 00:25:55.925 "nvme_version": "1.4" 00:25:55.925 }, 00:25:55.925 "ns_data": { 00:25:55.925 "id": 1, 00:25:55.925 "can_share": false 00:25:55.925 } 00:25:55.925 } 00:25:55.925 ], 00:25:55.925 "mp_policy": "active_passive" 00:25:55.925 } 00:25:55.925 } 00:25:55.925 ]' 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:55.925 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:56.184 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:25:56.184 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:56.184 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=79228bec-1ca3-4552-b380-4c1e7a347206 00:25:56.184 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 79228bec-1ca3-4552-b380-4c1e7a347206 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:56.443 09:39:56 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:25:56.443 09:39:56 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:56.703 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:56.703 { 00:25:56.703 "name": "8a14c9b6-606f-4a7f-bb97-80adec27d7b4", 00:25:56.703 "aliases": [ 00:25:56.703 "lvs/nvme0n1p0" 00:25:56.703 ], 00:25:56.703 "product_name": "Logical Volume", 00:25:56.703 "block_size": 4096, 00:25:56.703 "num_blocks": 26476544, 00:25:56.703 "uuid": "8a14c9b6-606f-4a7f-bb97-80adec27d7b4", 00:25:56.703 "assigned_rate_limits": { 00:25:56.703 "rw_ios_per_sec": 0, 00:25:56.703 "rw_mbytes_per_sec": 0, 00:25:56.703 "r_mbytes_per_sec": 0, 00:25:56.703 "w_mbytes_per_sec": 0 00:25:56.703 }, 00:25:56.703 "claimed": false, 00:25:56.703 "zoned": false, 00:25:56.703 "supported_io_types": { 00:25:56.703 "read": true, 00:25:56.703 "write": true, 00:25:56.703 "unmap": true, 00:25:56.703 "flush": false, 00:25:56.703 "reset": true, 00:25:56.703 "nvme_admin": false, 00:25:56.703 "nvme_io": false, 00:25:56.703 "nvme_io_md": false, 00:25:56.703 "write_zeroes": true, 00:25:56.703 "zcopy": false, 00:25:56.703 "get_zone_info": false, 00:25:56.703 "zone_management": false, 00:25:56.703 "zone_append": false, 00:25:56.703 "compare": false, 00:25:56.703 "compare_and_write": false, 00:25:56.703 "abort": false, 00:25:56.703 "seek_hole": true, 00:25:56.703 "seek_data": true, 00:25:56.703 "copy": false, 00:25:56.703 "nvme_iov_md": false 00:25:56.703 }, 00:25:56.703 "driver_specific": { 00:25:56.703 "lvol": { 00:25:56.703 "lvol_store_uuid": "79228bec-1ca3-4552-b380-4c1e7a347206", 00:25:56.703 "base_bdev": "nvme0n1", 00:25:56.703 "thin_provision": true, 00:25:56.703 "num_allocated_clusters": 0, 00:25:56.703 "snapshot": false, 00:25:56.703 "clone": false, 00:25:56.703 "esnap_clone": false 00:25:56.703 } 00:25:56.703 } 00:25:56.703 } 00:25:56.703 ]' 00:25:56.703 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:56.703 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:25:56.703 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:56.703 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:56.703 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:56.703 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:25:56.703 09:39:57 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:25:56.703 09:39:57 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:25:56.703 09:39:57 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:56.962 09:39:57 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:56.962 09:39:57 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:56.962 09:39:57 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:56.962 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:56.962 09:39:57 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:25:56.962 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:25:56.962 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:25:56.962 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:57.221 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:57.221 { 00:25:57.221 "name": "8a14c9b6-606f-4a7f-bb97-80adec27d7b4", 00:25:57.221 "aliases": [ 00:25:57.221 "lvs/nvme0n1p0" 00:25:57.221 ], 00:25:57.221 "product_name": "Logical Volume", 00:25:57.221 "block_size": 4096, 00:25:57.221 "num_blocks": 26476544, 00:25:57.221 "uuid": "8a14c9b6-606f-4a7f-bb97-80adec27d7b4", 00:25:57.221 "assigned_rate_limits": { 00:25:57.221 "rw_ios_per_sec": 0, 00:25:57.221 "rw_mbytes_per_sec": 0, 00:25:57.221 "r_mbytes_per_sec": 0, 00:25:57.221 "w_mbytes_per_sec": 0 00:25:57.221 }, 00:25:57.221 "claimed": false, 00:25:57.221 "zoned": false, 00:25:57.221 "supported_io_types": { 00:25:57.221 "read": true, 00:25:57.221 "write": true, 00:25:57.221 "unmap": true, 00:25:57.221 "flush": false, 00:25:57.221 "reset": true, 00:25:57.221 "nvme_admin": false, 00:25:57.221 "nvme_io": false, 00:25:57.221 "nvme_io_md": false, 00:25:57.221 "write_zeroes": true, 00:25:57.221 "zcopy": false, 00:25:57.221 "get_zone_info": false, 00:25:57.221 "zone_management": false, 00:25:57.221 "zone_append": false, 00:25:57.221 "compare": false, 00:25:57.221 "compare_and_write": false, 00:25:57.221 "abort": false, 00:25:57.221 "seek_hole": true, 00:25:57.221 "seek_data": true, 00:25:57.221 "copy": false, 00:25:57.221 "nvme_iov_md": false 00:25:57.221 }, 00:25:57.221 "driver_specific": { 00:25:57.221 "lvol": { 00:25:57.221 "lvol_store_uuid": "79228bec-1ca3-4552-b380-4c1e7a347206", 00:25:57.221 "base_bdev": "nvme0n1", 00:25:57.221 "thin_provision": true, 00:25:57.221 "num_allocated_clusters": 0, 00:25:57.221 "snapshot": false, 00:25:57.221 "clone": false, 00:25:57.221 "esnap_clone": false 00:25:57.221 } 00:25:57.221 } 00:25:57.221 } 00:25:57.221 ]' 00:25:57.221 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:57.221 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:25:57.221 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:57.221 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:57.221 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:57.221 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:25:57.221 09:39:57 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:25:57.221 09:39:57 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:57.480 09:39:57 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:25:57.480 09:39:57 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:25:57.480 09:39:57 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:25:57.480 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:25:57.480 09:39:57 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:57.480 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=8a14c9b6-606f-4a7f-bb97-80adec27d7b4 
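The "[: -eq: unary operator expected" message above comes from fio.sh line 52: the variable being tested expands to nothing, so the '[' builtin is handed only '-eq 1' with no left-hand operand. A minimal sketch of the failure and the usual guards follows; the variable name is hypothetical and not taken from fio.sh.

  # reproduce the error from fio.sh line 52 with a hypothetical variable
  unset maybe_flag
  [ $maybe_flag -eq 1 ] && echo enabled     # expands to '[ -eq 1 ]' -> "[: -eq: unary operator expected"

  # guards that keep the test well-formed when the variable is unset or empty
  [ "${maybe_flag:-0}" -eq 1 ] && echo enabled
  (( ${maybe_flag:-0} == 1 )) && echo enabled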
00:25:57.480 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:25:57.480 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:25:57.480 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:25:57.480 09:39:57 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8a14c9b6-606f-4a7f-bb97-80adec27d7b4 00:25:57.480 09:39:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:25:57.480 { 00:25:57.480 "name": "8a14c9b6-606f-4a7f-bb97-80adec27d7b4", 00:25:57.480 "aliases": [ 00:25:57.480 "lvs/nvme0n1p0" 00:25:57.480 ], 00:25:57.480 "product_name": "Logical Volume", 00:25:57.480 "block_size": 4096, 00:25:57.480 "num_blocks": 26476544, 00:25:57.480 "uuid": "8a14c9b6-606f-4a7f-bb97-80adec27d7b4", 00:25:57.480 "assigned_rate_limits": { 00:25:57.480 "rw_ios_per_sec": 0, 00:25:57.480 "rw_mbytes_per_sec": 0, 00:25:57.480 "r_mbytes_per_sec": 0, 00:25:57.480 "w_mbytes_per_sec": 0 00:25:57.480 }, 00:25:57.480 "claimed": false, 00:25:57.480 "zoned": false, 00:25:57.480 "supported_io_types": { 00:25:57.480 "read": true, 00:25:57.480 "write": true, 00:25:57.480 "unmap": true, 00:25:57.480 "flush": false, 00:25:57.480 "reset": true, 00:25:57.480 "nvme_admin": false, 00:25:57.480 "nvme_io": false, 00:25:57.480 "nvme_io_md": false, 00:25:57.480 "write_zeroes": true, 00:25:57.480 "zcopy": false, 00:25:57.480 "get_zone_info": false, 00:25:57.480 "zone_management": false, 00:25:57.480 "zone_append": false, 00:25:57.480 "compare": false, 00:25:57.480 "compare_and_write": false, 00:25:57.480 "abort": false, 00:25:57.480 "seek_hole": true, 00:25:57.480 "seek_data": true, 00:25:57.480 "copy": false, 00:25:57.480 "nvme_iov_md": false 00:25:57.480 }, 00:25:57.480 "driver_specific": { 00:25:57.480 "lvol": { 00:25:57.480 "lvol_store_uuid": "79228bec-1ca3-4552-b380-4c1e7a347206", 00:25:57.480 "base_bdev": "nvme0n1", 00:25:57.480 "thin_provision": true, 00:25:57.480 "num_allocated_clusters": 0, 00:25:57.480 "snapshot": false, 00:25:57.480 "clone": false, 00:25:57.480 "esnap_clone": false 00:25:57.480 } 00:25:57.480 } 00:25:57.480 } 00:25:57.480 ]' 00:25:57.480 09:39:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:25:57.480 09:39:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:25:57.480 09:39:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:25:57.480 09:39:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:25:57.480 09:39:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:25:57.480 09:39:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:25:57.480 09:39:58 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:25:57.480 09:39:58 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:25:57.480 09:39:58 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8a14c9b6-606f-4a7f-bb97-80adec27d7b4 -c nvc0n1p0 --l2p_dram_limit 60 00:25:57.739 [2024-07-25 09:39:58.252084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.252138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:57.739 [2024-07-25 09:39:58.252154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:57.739 [2024-07-25 09:39:58.252163] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.252266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.252278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.739 [2024-07-25 09:39:58.252286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:25:57.739 [2024-07-25 09:39:58.252295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.252341] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:57.739 [2024-07-25 09:39:58.253438] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:57.739 [2024-07-25 09:39:58.253463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.253476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.739 [2024-07-25 09:39:58.253484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:25:57.739 [2024-07-25 09:39:58.253493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.253551] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4564d887-3084-4e80-90ea-54b5014dec23 00:25:57.739 [2024-07-25 09:39:58.255000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.255029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:57.739 [2024-07-25 09:39:58.255041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:57.739 [2024-07-25 09:39:58.255049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.262645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.262677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.739 [2024-07-25 09:39:58.262707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.461 ms 00:25:57.739 [2024-07-25 09:39:58.262715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.262866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.262881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.739 [2024-07-25 09:39:58.262890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:25:57.739 [2024-07-25 09:39:58.262898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.263022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.263033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:57.739 [2024-07-25 09:39:58.263043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:57.739 [2024-07-25 09:39:58.263052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.263104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:57.739 [2024-07-25 09:39:58.268434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.268470] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.739 [2024-07-25 09:39:58.268480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.350 ms 00:25:57.739 [2024-07-25 09:39:58.268488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.268561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.268571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:57.739 [2024-07-25 09:39:58.268579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:57.739 [2024-07-25 09:39:58.268587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.268652] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:57.739 [2024-07-25 09:39:58.268788] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:57.739 [2024-07-25 09:39:58.268808] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:57.739 [2024-07-25 09:39:58.268823] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:57.739 [2024-07-25 09:39:58.268833] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:57.739 [2024-07-25 09:39:58.268843] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:57.739 [2024-07-25 09:39:58.268851] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:57.739 [2024-07-25 09:39:58.268859] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:57.739 [2024-07-25 09:39:58.268868] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:57.739 [2024-07-25 09:39:58.268876] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:57.739 [2024-07-25 09:39:58.268885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.268893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:57.739 [2024-07-25 09:39:58.268901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:25:57.739 [2024-07-25 09:39:58.268910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.269006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.739 [2024-07-25 09:39:58.269029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:57.739 [2024-07-25 09:39:58.269036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:57.739 [2024-07-25 09:39:58.269044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.739 [2024-07-25 09:39:58.269198] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:57.739 [2024-07-25 09:39:58.269221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:57.739 [2024-07-25 09:39:58.269239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.739 [2024-07-25 09:39:58.269249] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.739 [2024-07-25 09:39:58.269257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:57.739 [2024-07-25 
09:39:58.269265] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:57.739 [2024-07-25 09:39:58.269272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:57.739 [2024-07-25 09:39:58.269280] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:57.739 [2024-07-25 09:39:58.269286] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:57.739 [2024-07-25 09:39:58.269295] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.739 [2024-07-25 09:39:58.269304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:57.739 [2024-07-25 09:39:58.269312] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:57.739 [2024-07-25 09:39:58.269319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.739 [2024-07-25 09:39:58.269326] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:57.739 [2024-07-25 09:39:58.269333] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:57.739 [2024-07-25 09:39:58.269341] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.739 [2024-07-25 09:39:58.269348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:57.739 [2024-07-25 09:39:58.269357] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:57.739 [2024-07-25 09:39:58.269364] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.739 [2024-07-25 09:39:58.269372] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:57.739 [2024-07-25 09:39:58.269378] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:57.739 [2024-07-25 09:39:58.269385] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.739 [2024-07-25 09:39:58.269392] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:57.739 [2024-07-25 09:39:58.269399] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:57.739 [2024-07-25 09:39:58.269406] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.739 [2024-07-25 09:39:58.269413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:57.739 [2024-07-25 09:39:58.269419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:57.739 [2024-07-25 09:39:58.269427] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.739 [2024-07-25 09:39:58.269434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:57.739 [2024-07-25 09:39:58.269442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:57.739 [2024-07-25 09:39:58.269448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.739 [2024-07-25 09:39:58.269455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:57.739 [2024-07-25 09:39:58.269462] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:57.739 [2024-07-25 09:39:58.269471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.739 [2024-07-25 09:39:58.269478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:57.739 [2024-07-25 09:39:58.269486] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:57.739 [2024-07-25 09:39:58.269492] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:25:57.740 [2024-07-25 09:39:58.269500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:57.740 [2024-07-25 09:39:58.269506] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:57.740 [2024-07-25 09:39:58.269514] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.740 [2024-07-25 09:39:58.269520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:57.740 [2024-07-25 09:39:58.269528] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:57.740 [2024-07-25 09:39:58.269535] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.740 [2024-07-25 09:39:58.269543] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:57.740 [2024-07-25 09:39:58.269552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:57.740 [2024-07-25 09:39:58.269576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.740 [2024-07-25 09:39:58.269583] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.740 [2024-07-25 09:39:58.269591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:57.740 [2024-07-25 09:39:58.269598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:57.740 [2024-07-25 09:39:58.269608] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:57.740 [2024-07-25 09:39:58.269615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:57.740 [2024-07-25 09:39:58.269622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:57.740 [2024-07-25 09:39:58.269628] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:57.740 [2024-07-25 09:39:58.269640] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:57.740 [2024-07-25 09:39:58.269649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.740 [2024-07-25 09:39:58.269660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:57.740 [2024-07-25 09:39:58.269667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:57.740 [2024-07-25 09:39:58.269677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:57.740 [2024-07-25 09:39:58.269684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:57.740 [2024-07-25 09:39:58.269692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:57.740 [2024-07-25 09:39:58.269698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:57.740 [2024-07-25 09:39:58.269706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:57.740 [2024-07-25 09:39:58.269713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:57.740 [2024-07-25 
09:39:58.269721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:57.740 [2024-07-25 09:39:58.269728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:57.740 [2024-07-25 09:39:58.269737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:57.740 [2024-07-25 09:39:58.269743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:57.740 [2024-07-25 09:39:58.269751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:57.740 [2024-07-25 09:39:58.269758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:57.740 [2024-07-25 09:39:58.269766] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:57.740 [2024-07-25 09:39:58.269776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.740 [2024-07-25 09:39:58.269785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:57.740 [2024-07-25 09:39:58.269792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:57.740 [2024-07-25 09:39:58.269800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:57.740 [2024-07-25 09:39:58.269808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:57.740 [2024-07-25 09:39:58.269818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.740 [2024-07-25 09:39:58.269827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:57.740 [2024-07-25 09:39:58.269836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:25:57.740 [2024-07-25 09:39:58.269843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.740 [2024-07-25 09:39:58.269983] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
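The layout dump above can be cross-checked with a little shell arithmetic: 20971520 L2P entries at the reported 4-byte address size come to 80 MiB of mapping metadata, matching the 80.00 MiB l2p region, while the --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that map stays resident in DRAM. The derivation below is a back-of-the-envelope sketch built from the numbers in the log, not output from the test itself.

  # cross-check of the FTL layout numbers printed above
  l2p_entries=20971520    # "L2P entries" from ftl_layout_setup
  l2p_addr_size=4         # "L2P address size" in bytes
  block_size=4096         # logical block size of the base bdev
  echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))   # 80    -> MiB for the full map ("Region l2p ... 80.00 MiB")
  echo $(( l2p_entries * block_size / 1024 / 1024 ))      # 81920 -> MiB (80 GiB) of addressable user data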
00:25:57.740 [2024-07-25 09:39:58.269998] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:01.949 [2024-07-25 09:40:02.032096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.032173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:01.949 [2024-07-25 09:40:02.032196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3769.349 ms 00:26:01.949 [2024-07-25 09:40:02.032207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.949 [2024-07-25 09:40:02.083057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.083118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:01.949 [2024-07-25 09:40:02.083139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.597 ms 00:26:01.949 [2024-07-25 09:40:02.083150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.949 [2024-07-25 09:40:02.083389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.083407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:01.949 [2024-07-25 09:40:02.083420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:01.949 [2024-07-25 09:40:02.083435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.949 [2024-07-25 09:40:02.152609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.152670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:01.949 [2024-07-25 09:40:02.152693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.232 ms 00:26:01.949 [2024-07-25 09:40:02.152705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.949 [2024-07-25 09:40:02.152772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.152785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:01.949 [2024-07-25 09:40:02.152804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:01.949 [2024-07-25 09:40:02.152817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.949 [2024-07-25 09:40:02.153787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.153824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:01.949 [2024-07-25 09:40:02.153841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.822 ms 00:26:01.949 [2024-07-25 09:40:02.153854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.949 [2024-07-25 09:40:02.154018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.154042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:01.949 [2024-07-25 09:40:02.154058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:26:01.949 [2024-07-25 09:40:02.154071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.949 [2024-07-25 09:40:02.185276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.185319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:01.949 [2024-07-25 
09:40:02.185336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.219 ms 00:26:01.949 [2024-07-25 09:40:02.185346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.949 [2024-07-25 09:40:02.200455] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:26:01.949 [2024-07-25 09:40:02.227888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.227953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:01.949 [2024-07-25 09:40:02.227971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.485 ms 00:26:01.949 [2024-07-25 09:40:02.227984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.949 [2024-07-25 09:40:02.322265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.322330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:01.949 [2024-07-25 09:40:02.322346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.396 ms 00:26:01.949 [2024-07-25 09:40:02.322358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.949 [2024-07-25 09:40:02.322623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.949 [2024-07-25 09:40:02.322641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:01.949 [2024-07-25 09:40:02.322653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:26:01.949 [2024-07-25 09:40:02.322668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.950 [2024-07-25 09:40:02.361861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.950 [2024-07-25 09:40:02.361932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:01.950 [2024-07-25 09:40:02.361947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.207 ms 00:26:01.950 [2024-07-25 09:40:02.361960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.950 [2024-07-25 09:40:02.399133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.950 [2024-07-25 09:40:02.399181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:01.950 [2024-07-25 09:40:02.399196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.186 ms 00:26:01.950 [2024-07-25 09:40:02.399208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.950 [2024-07-25 09:40:02.400037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.950 [2024-07-25 09:40:02.400070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:01.950 [2024-07-25 09:40:02.400082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:26:01.950 [2024-07-25 09:40:02.400094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.950 [2024-07-25 09:40:02.518466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.950 [2024-07-25 09:40:02.518531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:01.950 [2024-07-25 09:40:02.518546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.529 ms 00:26:01.950 [2024-07-25 09:40:02.518561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:01.950 [2024-07-25 
09:40:02.558185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:01.950 [2024-07-25 09:40:02.558246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:01.950 [2024-07-25 09:40:02.558261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.644 ms 00:26:01.950 [2024-07-25 09:40:02.558273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.209 [2024-07-25 09:40:02.597247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.209 [2024-07-25 09:40:02.597298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:02.209 [2024-07-25 09:40:02.597311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.990 ms 00:26:02.209 [2024-07-25 09:40:02.597322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.209 [2024-07-25 09:40:02.650154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.209 [2024-07-25 09:40:02.650226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:02.209 [2024-07-25 09:40:02.650242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.878 ms 00:26:02.209 [2024-07-25 09:40:02.650274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.209 [2024-07-25 09:40:02.650352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.209 [2024-07-25 09:40:02.650363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:02.209 [2024-07-25 09:40:02.650372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:02.209 [2024-07-25 09:40:02.650385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.209 [2024-07-25 09:40:02.650549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.209 [2024-07-25 09:40:02.650572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:02.209 [2024-07-25 09:40:02.650581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:26:02.209 [2024-07-25 09:40:02.650591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.209 [2024-07-25 09:40:02.651933] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4407.791 ms, result 0 00:26:02.209 { 00:26:02.209 "name": "ftl0", 00:26:02.209 "uuid": "4564d887-3084-4e80-90ea-54b5014dec23" 00:26:02.209 } 00:26:02.209 09:40:02 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:26:02.209 09:40:02 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:26:02.209 09:40:02 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:26:02.209 09:40:02 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:26:02.209 09:40:02 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:26:02.209 09:40:02 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:26:02.209 09:40:02 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:02.468 09:40:02 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:26:02.468 [ 00:26:02.468 { 00:26:02.468 "name": "ftl0", 00:26:02.468 "aliases": [ 00:26:02.468 "4564d887-3084-4e80-90ea-54b5014dec23" 00:26:02.468 ], 00:26:02.468 "product_name": "FTL 
disk", 00:26:02.468 "block_size": 4096, 00:26:02.468 "num_blocks": 20971520, 00:26:02.468 "uuid": "4564d887-3084-4e80-90ea-54b5014dec23", 00:26:02.468 "assigned_rate_limits": { 00:26:02.468 "rw_ios_per_sec": 0, 00:26:02.468 "rw_mbytes_per_sec": 0, 00:26:02.468 "r_mbytes_per_sec": 0, 00:26:02.468 "w_mbytes_per_sec": 0 00:26:02.468 }, 00:26:02.468 "claimed": false, 00:26:02.468 "zoned": false, 00:26:02.468 "supported_io_types": { 00:26:02.468 "read": true, 00:26:02.468 "write": true, 00:26:02.468 "unmap": true, 00:26:02.468 "flush": true, 00:26:02.468 "reset": false, 00:26:02.468 "nvme_admin": false, 00:26:02.468 "nvme_io": false, 00:26:02.468 "nvme_io_md": false, 00:26:02.468 "write_zeroes": true, 00:26:02.468 "zcopy": false, 00:26:02.468 "get_zone_info": false, 00:26:02.468 "zone_management": false, 00:26:02.468 "zone_append": false, 00:26:02.468 "compare": false, 00:26:02.468 "compare_and_write": false, 00:26:02.468 "abort": false, 00:26:02.468 "seek_hole": false, 00:26:02.468 "seek_data": false, 00:26:02.468 "copy": false, 00:26:02.468 "nvme_iov_md": false 00:26:02.468 }, 00:26:02.468 "driver_specific": { 00:26:02.468 "ftl": { 00:26:02.468 "base_bdev": "8a14c9b6-606f-4a7f-bb97-80adec27d7b4", 00:26:02.468 "cache": "nvc0n1p0" 00:26:02.468 } 00:26:02.468 } 00:26:02.468 } 00:26:02.468 ] 00:26:02.468 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:26:02.468 09:40:03 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:26:02.468 09:40:03 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:02.727 09:40:03 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:26:02.727 09:40:03 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:02.988 [2024-07-25 09:40:03.376708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.376759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:02.988 [2024-07-25 09:40:03.376777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:02.988 [2024-07-25 09:40:03.376785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.376844] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:02.988 [2024-07-25 09:40:03.380627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.380661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:02.988 [2024-07-25 09:40:03.380672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.774 ms 00:26:02.988 [2024-07-25 09:40:03.380681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.381600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.381626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:02.988 [2024-07-25 09:40:03.381636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:26:02.988 [2024-07-25 09:40:03.381648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.384119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.384139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:02.988 
[2024-07-25 09:40:03.384147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:26:02.988 [2024-07-25 09:40:03.384156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.389054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.389083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:02.988 [2024-07-25 09:40:03.389092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.862 ms 00:26:02.988 [2024-07-25 09:40:03.389121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.425469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.425507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:02.988 [2024-07-25 09:40:03.425518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.298 ms 00:26:02.988 [2024-07-25 09:40:03.425528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.447849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.447891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:02.988 [2024-07-25 09:40:03.447918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.298 ms 00:26:02.988 [2024-07-25 09:40:03.447928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.448250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.448265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:02.988 [2024-07-25 09:40:03.448274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:26:02.988 [2024-07-25 09:40:03.448283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.485487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.485522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:02.988 [2024-07-25 09:40:03.485547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.231 ms 00:26:02.988 [2024-07-25 09:40:03.485556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.521227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.521264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:02.988 [2024-07-25 09:40:03.521289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.672 ms 00:26:02.988 [2024-07-25 09:40:03.521298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.557446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.557481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:02.988 [2024-07-25 09:40:03.557506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.158 ms 00:26:02.988 [2024-07-25 09:40:03.557515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.591782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.988 [2024-07-25 09:40:03.591817] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:02.988 [2024-07-25 09:40:03.591827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.155 ms 00:26:02.988 [2024-07-25 09:40:03.591835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:02.988 [2024-07-25 09:40:03.591896] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:02.988 [2024-07-25 09:40:03.591911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.591920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.591929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.591937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.591945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.591952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.591961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.591968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.591979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.591986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.591995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 
[2024-07-25 09:40:03.592091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:02.988 [2024-07-25 09:40:03.592124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:26:02.989 [2024-07-25 09:40:03.592315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:02.989 [2024-07-25 09:40:03.592772] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:02.989 [2024-07-25 09:40:03.592779] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4564d887-3084-4e80-90ea-54b5014dec23 00:26:02.989 [2024-07-25 09:40:03.592788] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:02.989 [2024-07-25 09:40:03.592797] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:02.989 [2024-07-25 09:40:03.592808] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:02.989 [2024-07-25 09:40:03.592815] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:02.989 [2024-07-25 09:40:03.592823] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:02.989 [2024-07-25 09:40:03.592830] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:02.989 [2024-07-25 09:40:03.592838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:02.989 [2024-07-25 09:40:03.592844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:02.989 [2024-07-25 09:40:03.592851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:02.990 [2024-07-25 09:40:03.592859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:02.990 [2024-07-25 09:40:03.592867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:02.990 [2024-07-25 09:40:03.592876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:26:02.990 [2024-07-25 09:40:03.592884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.248 [2024-07-25 09:40:03.612836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.248 [2024-07-25 09:40:03.612865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:03.248 [2024-07-25 09:40:03.612885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.908 ms 00:26:03.248 [2024-07-25 09:40:03.612893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.248 [2024-07-25 09:40:03.613404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.248 [2024-07-25 09:40:03.613417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:03.248 [2024-07-25 09:40:03.613426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.437 ms 00:26:03.248 [2024-07-25 09:40:03.613435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.249 [2024-07-25 09:40:03.678334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.249 [2024-07-25 09:40:03.678368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.249 [2024-07-25 09:40:03.678379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.249 [2024-07-25 09:40:03.678389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
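A quick sanity check of the shutdown dump above: each band covers 261120 blocks of 4096 bytes, i.e. roughly 1 GiB, and every band reports 0 valid blocks with a zero write count, which is consistent with the statistics block (0 valid LBAs, 0 user writes, only internal metadata writes) for a device created and unloaded without user I/O. The arithmetic below is a sketch using figures from the log, not something the test prints.

  # sizing the bands listed in the shutdown dump
  band_blocks=261120            # blocks per band, from the "Bands validity" dump
  block_size=4096               # block size reported for the base bdev
  echo $(( band_blocks * block_size / 1024 / 1024 ))   # 1020 -> MiB per band
  # ~100 such bands roughly account for the 102400 MiB data_btm region in the layout dump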
00:26:03.249 [2024-07-25 09:40:03.678468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.249 [2024-07-25 09:40:03.678478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.249 [2024-07-25 09:40:03.678486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.249 [2024-07-25 09:40:03.678494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.249 [2024-07-25 09:40:03.678609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.249 [2024-07-25 09:40:03.678623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.249 [2024-07-25 09:40:03.678631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.249 [2024-07-25 09:40:03.678640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.249 [2024-07-25 09:40:03.678690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.249 [2024-07-25 09:40:03.678701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.249 [2024-07-25 09:40:03.678709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.249 [2024-07-25 09:40:03.678717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.249 [2024-07-25 09:40:03.799920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.249 [2024-07-25 09:40:03.799972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.249 [2024-07-25 09:40:03.799985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.249 [2024-07-25 09:40:03.799995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.507 [2024-07-25 09:40:03.895789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.507 [2024-07-25 09:40:03.895844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.507 [2024-07-25 09:40:03.895856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.507 [2024-07-25 09:40:03.895865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.507 [2024-07-25 09:40:03.895992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.507 [2024-07-25 09:40:03.896007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.507 [2024-07-25 09:40:03.896015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.507 [2024-07-25 09:40:03.896024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.507 [2024-07-25 09:40:03.896133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.507 [2024-07-25 09:40:03.896147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.507 [2024-07-25 09:40:03.896154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.507 [2024-07-25 09:40:03.896163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.507 [2024-07-25 09:40:03.896312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.507 [2024-07-25 09:40:03.896333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.507 [2024-07-25 09:40:03.896341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.507 [2024-07-25 
09:40:03.896350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.507 [2024-07-25 09:40:03.896413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.507 [2024-07-25 09:40:03.896425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:03.507 [2024-07-25 09:40:03.896432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.507 [2024-07-25 09:40:03.896441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.507 [2024-07-25 09:40:03.896509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.507 [2024-07-25 09:40:03.896519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:03.507 [2024-07-25 09:40:03.896528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.507 [2024-07-25 09:40:03.896536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.507 [2024-07-25 09:40:03.896602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.507 [2024-07-25 09:40:03.896615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:03.507 [2024-07-25 09:40:03.896622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.507 [2024-07-25 09:40:03.896631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.507 [2024-07-25 09:40:03.896888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.170 ms, result 0 00:26:03.507 true 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 78678 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 78678 ']' 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 78678 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78678 00:26:03.507 killing process with pid 78678 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78678' 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 78678 00:26:03.507 09:40:03 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 78678 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:11.626 09:40:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:26:11.626 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:26:11.626 fio-3.35 00:26:11.626 Starting 1 thread 00:26:16.903 00:26:16.903 test: (groupid=0, jobs=1): err= 0: pid=78916: Thu Jul 25 09:40:16 2024 00:26:16.903 read: IOPS=847, BW=56.3MiB/s (59.0MB/s)(255MiB/4524msec) 00:26:16.903 slat (nsec): min=6898, max=40499, avg=10331.40, stdev=3091.81 00:26:16.903 clat (usec): min=365, max=921, avg=523.77, stdev=53.77 00:26:16.903 lat (usec): min=375, max=931, avg=534.11, stdev=54.26 00:26:16.903 clat percentiles (usec): 00:26:16.903 | 1.00th=[ 392], 5.00th=[ 457], 10.00th=[ 461], 20.00th=[ 474], 00:26:16.903 | 30.00th=[ 486], 40.00th=[ 506], 50.00th=[ 537], 60.00th=[ 545], 00:26:16.903 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 578], 95.00th=[ 594], 00:26:16.903 | 99.00th=[ 652], 99.50th=[ 709], 99.90th=[ 906], 99.95th=[ 922], 00:26:16.903 | 99.99th=[ 922] 00:26:16.903 write: IOPS=853, BW=56.7MiB/s (59.4MB/s)(256MiB/4519msec); 0 zone resets 00:26:16.903 slat (nsec): min=16739, max=74131, avg=30531.85, stdev=5376.63 00:26:16.903 clat (usec): min=412, max=1151, avg=598.57, stdev=67.01 00:26:16.903 lat (usec): min=442, max=1185, avg=629.10, stdev=67.46 00:26:16.903 clat percentiles (usec): 00:26:16.903 | 1.00th=[ 478], 5.00th=[ 506], 10.00th=[ 545], 20.00th=[ 562], 00:26:16.903 | 30.00th=[ 570], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 603], 00:26:16.903 | 70.00th=[ 627], 80.00th=[ 644], 90.00th=[ 660], 95.00th=[ 668], 00:26:16.903 | 99.00th=[ 906], 99.50th=[ 988], 99.90th=[ 1090], 99.95th=[ 1123], 00:26:16.903 | 99.99th=[ 1156] 00:26:16.903 bw ( KiB/s): min=55624, max=59296, per=100.00%, avg=58026.67, stdev=1341.17, samples=9 00:26:16.903 iops : min= 818, max= 872, avg=853.33, stdev=19.72, samples=9 00:26:16.903 lat (usec) : 500=21.47%, 750=77.41%, 1000=0.92% 00:26:16.903 
lat (msec) : 2=0.20% 00:26:16.903 cpu : usr=99.29%, sys=0.09%, ctx=11, majf=0, minf=1171 00:26:16.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:16.903 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:16.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:16.903 00:26:16.903 Run status group 0 (all jobs): 00:26:16.903 READ: bw=56.3MiB/s (59.0MB/s), 56.3MiB/s-56.3MiB/s (59.0MB/s-59.0MB/s), io=255MiB (267MB), run=4524-4524msec 00:26:16.903 WRITE: bw=56.7MiB/s (59.4MB/s), 56.7MiB/s-56.7MiB/s (59.4MB/s-59.4MB/s), io=256MiB (269MB), run=4519-4519msec 00:26:18.812 ----------------------------------------------------- 00:26:18.812 Suppressions used: 00:26:18.812 count bytes template 00:26:18.812 1 5 /usr/src/fio/parse.c 00:26:18.812 1 8 libtcmalloc_minimal.so 00:26:18.812 1 904 libcrypto.so 00:26:18.812 ----------------------------------------------------- 00:26:18.812 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:18.812 09:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:26:18.812 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:18.812 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:18.812 fio-3.35 00:26:18.812 Starting 2 threads 00:26:50.926 00:26:50.926 first_half: (groupid=0, jobs=1): err= 0: pid=79036: Thu Jul 25 09:40:49 2024 00:26:50.926 read: IOPS=2319, BW=9278KiB/s (9500kB/s)(255MiB/28130msec) 00:26:50.926 slat (nsec): min=3788, max=47771, avg=9314.49, stdev=4177.88 00:26:50.926 clat (usec): min=1152, max=346340, avg=41181.52, stdev=22317.38 00:26:50.926 lat (usec): min=1161, max=346349, avg=41190.83, stdev=22317.80 00:26:50.926 clat percentiles (msec): 00:26:50.926 | 1.00th=[ 9], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:26:50.926 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39], 00:26:50.926 | 70.00th=[ 39], 80.00th=[ 39], 90.00th=[ 44], 95.00th=[ 53], 00:26:50.926 | 99.00th=[ 174], 99.50th=[ 201], 99.90th=[ 279], 99.95th=[ 309], 00:26:50.926 | 99.99th=[ 338] 00:26:50.926 write: IOPS=2779, BW=10.9MiB/s (11.4MB/s)(256MiB/23582msec); 0 zone resets 00:26:50.926 slat (usec): min=4, max=684, avg=11.21, stdev= 9.55 00:26:50.926 clat (usec): min=412, max=100928, avg=13890.16, stdev=23949.46 00:26:50.926 lat (usec): min=425, max=100943, avg=13901.37, stdev=23950.10 00:26:50.926 clat percentiles (usec): 00:26:50.926 | 1.00th=[ 1188], 5.00th=[ 1663], 10.00th=[ 1909], 20.00th=[ 2343], 00:26:50.926 | 30.00th=[ 4047], 40.00th=[ 5669], 50.00th=[ 6652], 60.00th=[ 7570], 00:26:50.926 | 70.00th=[ 8586], 80.00th=[ 12387], 90.00th=[ 16057], 95.00th=[ 88605], 00:26:50.926 | 99.00th=[ 93848], 99.50th=[ 94897], 99.90th=[ 98042], 99.95th=[ 99091], 00:26:50.926 | 99.99th=[100140] 00:26:50.926 bw ( KiB/s): min= 40, max=40928, per=84.22%, avg=18724.57, stdev=12999.81, samples=28 00:26:50.926 iops : min= 10, max=10232, avg=4681.14, stdev=3249.95, samples=28 00:26:50.926 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.12% 00:26:50.926 lat (msec) : 2=6.11%, 4=8.88%, 10=23.18%, 20=7.93%, 50=46.55% 00:26:50.926 lat (msec) : 100=5.86%, 250=1.23%, 500=0.07% 00:26:50.926 cpu : usr=99.29%, sys=0.18%, ctx=45, majf=0, minf=5561 00:26:50.926 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:50.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.927 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:50.927 issued rwts: total=65246,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:50.927 second_half: (groupid=0, jobs=1): err= 0: pid=79037: Thu Jul 25 09:40:49 2024 00:26:50.927 read: IOPS=2307, BW=9228KiB/s (9450kB/s)(255MiB/28257msec) 00:26:50.927 slat (nsec): min=3674, max=79706, avg=9589.10, stdev=3541.68 00:26:50.927 clat (usec): min=989, max=354255, avg=40883.06, stdev=22022.69 00:26:50.927 lat (usec): min=1003, max=354267, avg=40892.65, stdev=22023.14 00:26:50.927 clat percentiles (msec): 00:26:50.927 | 1.00th=[ 8], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:26:50.927 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 39], 60.00th=[ 39], 00:26:50.927 | 70.00th=[ 39], 80.00th=[ 39], 90.00th=[ 44], 95.00th=[ 45], 
00:26:50.927 | 99.00th=[ 169], 99.50th=[ 190], 99.90th=[ 209], 99.95th=[ 243], 00:26:50.927 | 99.99th=[ 347] 00:26:50.927 write: IOPS=3161, BW=12.3MiB/s (12.9MB/s)(256MiB/20729msec); 0 zone resets 00:26:50.927 slat (usec): min=4, max=548, avg=11.10, stdev= 6.46 00:26:50.927 clat (usec): min=456, max=101228, avg=14491.10, stdev=24436.70 00:26:50.927 lat (usec): min=469, max=101235, avg=14502.20, stdev=24436.99 00:26:50.927 clat percentiles (usec): 00:26:50.927 | 1.00th=[ 1205], 5.00th=[ 1565], 10.00th=[ 1795], 20.00th=[ 2040], 00:26:50.927 | 30.00th=[ 2376], 40.00th=[ 4080], 50.00th=[ 6128], 60.00th=[ 7832], 00:26:50.927 | 70.00th=[ 10552], 80.00th=[ 13698], 90.00th=[ 40633], 95.00th=[ 89654], 00:26:50.927 | 99.00th=[ 94897], 99.50th=[ 95945], 99.90th=[ 99091], 99.95th=[ 99091], 00:26:50.927 | 99.99th=[100140] 00:26:50.927 bw ( KiB/s): min= 824, max=40760, per=90.70%, avg=20164.92, stdev=11966.84, samples=26 00:26:50.927 iops : min= 206, max=10190, avg=5041.15, stdev=2991.70, samples=26 00:26:50.927 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.10% 00:26:50.927 lat (msec) : 2=9.01%, 4=11.07%, 10=14.79%, 20=10.58%, 50=47.63% 00:26:50.927 lat (msec) : 100=5.30%, 250=1.46%, 500=0.02% 00:26:50.927 cpu : usr=99.02%, sys=0.24%, ctx=279, majf=0, minf=5562 00:26:50.927 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:50.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.927 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:50.927 issued rwts: total=65191,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.927 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:50.927 00:26:50.927 Run status group 0 (all jobs): 00:26:50.927 READ: bw=18.0MiB/s (18.9MB/s), 9228KiB/s-9278KiB/s (9450kB/s-9500kB/s), io=510MiB (534MB), run=28130-28257msec 00:26:50.927 WRITE: bw=21.7MiB/s (22.8MB/s), 10.9MiB/s-12.3MiB/s (11.4MB/s-12.9MB/s), io=512MiB (537MB), run=20729-23582msec 00:26:50.927 ----------------------------------------------------- 00:26:50.927 Suppressions used: 00:26:50.927 count bytes template 00:26:50.927 2 10 /usr/src/fio/parse.c 00:26:50.927 3 288 /usr/src/fio/iolog.c 00:26:50.927 1 8 libtcmalloc_minimal.so 00:26:50.927 1 904 libcrypto.so 00:26:50.927 ----------------------------------------------------- 00:26:50.927 00:26:50.927 09:40:51 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:26:50.927 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:50.927 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:50.927 09:40:51 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:26:50.927 09:40:51 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:26:50.927 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:50.927 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:26:51.187 09:40:51 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:51.187 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:51.187 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:51.187 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 
00:26:51.187 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:51.187 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:51.187 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:26:51.188 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:51.188 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:51.188 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:51.188 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:26:51.188 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:51.188 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:51.188 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:51.188 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:26:51.188 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:51.188 09:40:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:26:51.188 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:26:51.188 fio-3.35 00:26:51.188 Starting 1 thread 00:27:09.329 00:27:09.329 test: (groupid=0, jobs=1): err= 0: pid=79396: Thu Jul 25 09:41:09 2024 00:27:09.329 read: IOPS=5986, BW=23.4MiB/s (24.5MB/s)(255MiB/10892msec) 00:27:09.329 slat (nsec): min=3438, max=44180, avg=7565.51, stdev=2963.76 00:27:09.329 clat (usec): min=654, max=41909, avg=21370.02, stdev=1182.15 00:27:09.329 lat (usec): min=658, max=41918, avg=21377.59, stdev=1182.07 00:27:09.329 clat percentiles (usec): 00:27:09.329 | 1.00th=[20317], 5.00th=[20579], 10.00th=[20579], 20.00th=[20841], 00:27:09.329 | 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21365], 00:27:09.329 | 70.00th=[21627], 80.00th=[21627], 90.00th=[21890], 95.00th=[22152], 00:27:09.329 | 99.00th=[24249], 99.50th=[28705], 99.90th=[34341], 99.95th=[36439], 00:27:09.329 | 99.99th=[41157] 00:27:09.329 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(256MiB/5686msec); 0 zone resets 00:27:09.329 slat (usec): min=4, max=376, avg= 9.16, stdev= 5.46 00:27:09.329 clat (usec): min=648, max=67372, avg=11051.51, stdev=14398.75 00:27:09.329 lat (usec): min=657, max=67379, avg=11060.67, stdev=14398.90 00:27:09.329 clat percentiles (usec): 00:27:09.329 | 1.00th=[ 1156], 5.00th=[ 1418], 10.00th=[ 1614], 20.00th=[ 1876], 00:27:09.329 | 30.00th=[ 2114], 40.00th=[ 2507], 50.00th=[ 6521], 60.00th=[ 7504], 00:27:09.329 | 70.00th=[ 8586], 80.00th=[10552], 90.00th=[43254], 95.00th=[45351], 00:27:09.329 | 99.00th=[47449], 99.50th=[48497], 99.90th=[58983], 99.95th=[59507], 00:27:09.329 | 99.99th=[62653] 00:27:09.329 bw ( KiB/s): min=12944, max=69384, per=94.77%, avg=43690.67, stdev=15187.52, samples=12 00:27:09.329 iops : min= 3236, max=17346, avg=10922.67, stdev=3796.88, samples=12 00:27:09.329 lat (usec) : 750=0.01%, 1000=0.10% 00:27:09.329 lat (msec) : 2=12.82%, 4=8.08%, 10=18.00%, 20=3.19%, 50=57.65% 00:27:09.329 lat (msec) : 100=0.15% 00:27:09.329 cpu : usr=98.89%, sys=0.22%, 
ctx=28, majf=0, minf=5567 00:27:09.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:09.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:09.329 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:09.329 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:09.329 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:09.329 00:27:09.329 Run status group 0 (all jobs): 00:27:09.329 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=255MiB (267MB), run=10892-10892msec 00:27:09.329 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=256MiB (268MB), run=5686-5686msec 00:27:11.239 ----------------------------------------------------- 00:27:11.239 Suppressions used: 00:27:11.239 count bytes template 00:27:11.239 1 5 /usr/src/fio/parse.c 00:27:11.239 2 192 /usr/src/fio/iolog.c 00:27:11.239 1 8 libtcmalloc_minimal.so 00:27:11.239 1 904 libcrypto.so 00:27:11.239 ----------------------------------------------------- 00:27:11.239 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:27:11.239 Remove shared memory files 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid62295 /dev/shm/spdk_tgt_trace.pid77600 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:27:11.239 00:27:11.239 real 1m17.150s 00:27:11.239 user 2m47.292s 00:27:11.239 sys 0m3.420s 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:11.239 09:41:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:27:11.239 ************************************ 00:27:11.239 END TEST ftl_fio_basic 00:27:11.239 ************************************ 00:27:11.239 09:41:11 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:27:11.239 09:41:11 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:11.239 09:41:11 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:11.239 09:41:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:11.239 ************************************ 00:27:11.239 START TEST ftl_bdevperf 00:27:11.239 ************************************ 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:27:11.239 * Looking for test storage... 
00:27:11.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:27:11.239 09:41:11 ftl.ftl_bdevperf 
-- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=79657 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 79657 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 79657 ']' 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:11.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:11.239 09:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:11.239 [2024-07-25 09:41:11.829863] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:11.239 [2024-07-25 09:41:11.829985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79657 ] 00:27:11.499 [2024-07-25 09:41:11.990922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.759 [2024-07-25 09:41:12.198617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.017 09:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:12.017 09:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:27:12.017 09:41:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:12.017 09:41:12 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:27:12.017 09:41:12 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:12.017 09:41:12 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:27:12.017 09:41:12 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:27:12.017 09:41:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:12.586 09:41:12 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:12.586 09:41:12 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:27:12.586 09:41:12 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:12.586 09:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:12.586 09:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:12.586 09:41:12 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:27:12.586 09:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:27:12.586 09:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:12.586 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:12.586 { 00:27:12.586 "name": "nvme0n1", 00:27:12.586 "aliases": [ 00:27:12.586 "a20d64d3-bf0f-4acd-9c76-0cc04dea6d05" 00:27:12.586 ], 00:27:12.586 "product_name": "NVMe disk", 00:27:12.586 "block_size": 4096, 00:27:12.586 "num_blocks": 1310720, 00:27:12.586 "uuid": "a20d64d3-bf0f-4acd-9c76-0cc04dea6d05", 00:27:12.586 "assigned_rate_limits": { 00:27:12.586 "rw_ios_per_sec": 0, 00:27:12.586 "rw_mbytes_per_sec": 0, 00:27:12.586 "r_mbytes_per_sec": 0, 00:27:12.586 "w_mbytes_per_sec": 0 00:27:12.586 }, 00:27:12.586 "claimed": true, 00:27:12.586 "claim_type": "read_many_write_one", 00:27:12.586 "zoned": false, 00:27:12.586 "supported_io_types": { 00:27:12.586 "read": true, 00:27:12.586 "write": true, 00:27:12.586 "unmap": true, 00:27:12.586 "flush": true, 00:27:12.586 "reset": true, 00:27:12.586 "nvme_admin": true, 00:27:12.586 "nvme_io": true, 00:27:12.586 "nvme_io_md": false, 00:27:12.586 "write_zeroes": true, 00:27:12.586 "zcopy": false, 00:27:12.586 "get_zone_info": false, 00:27:12.586 "zone_management": false, 00:27:12.586 "zone_append": false, 00:27:12.586 "compare": true, 00:27:12.586 "compare_and_write": false, 00:27:12.586 "abort": true, 00:27:12.586 "seek_hole": false, 00:27:12.586 "seek_data": false, 00:27:12.586 "copy": true, 00:27:12.586 "nvme_iov_md": false 00:27:12.586 }, 00:27:12.586 "driver_specific": { 00:27:12.586 "nvme": [ 00:27:12.586 { 00:27:12.586 "pci_address": "0000:00:11.0", 00:27:12.586 "trid": { 00:27:12.586 "trtype": "PCIe", 00:27:12.586 "traddr": "0000:00:11.0" 00:27:12.586 }, 00:27:12.586 "ctrlr_data": { 00:27:12.586 "cntlid": 0, 00:27:12.586 "vendor_id": "0x1b36", 00:27:12.586 "model_number": "QEMU NVMe Ctrl", 00:27:12.586 "serial_number": "12341", 00:27:12.586 "firmware_revision": "8.0.0", 00:27:12.586 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:12.586 "oacs": { 00:27:12.586 "security": 0, 00:27:12.586 "format": 1, 00:27:12.586 "firmware": 0, 00:27:12.586 "ns_manage": 1 00:27:12.586 }, 00:27:12.586 "multi_ctrlr": false, 00:27:12.587 "ana_reporting": false 00:27:12.587 }, 00:27:12.587 "vs": { 00:27:12.587 "nvme_version": "1.4" 00:27:12.587 }, 00:27:12.587 "ns_data": { 00:27:12.587 "id": 1, 00:27:12.587 "can_share": false 00:27:12.587 } 00:27:12.587 } 00:27:12.587 ], 00:27:12.587 "mp_policy": "active_passive" 00:27:12.587 } 00:27:12.587 } 00:27:12.587 ]' 00:27:12.587 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:12.587 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:27:12.587 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:12.587 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:12.587 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:12.587 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:27:12.587 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:27:12.587 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:12.587 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:27:12.587 09:41:13 ftl.ftl_bdevperf 
-- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:12.587 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:12.851 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=79228bec-1ca3-4552-b380-4c1e7a347206 00:27:12.851 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:27:12.851 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 79228bec-1ca3-4552-b380-4c1e7a347206 00:27:13.114 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=aaac27f8-9ed2-4615-af36-f764bb6f8829 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u aaac27f8-9ed2-4615-af36-f764bb6f8829 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:13.375 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:27:13.635 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:27:13.635 09:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:13.635 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:13.635 { 00:27:13.635 "name": "fb79dd4b-ef7a-4889-a09c-0b2cce89058e", 00:27:13.635 "aliases": [ 00:27:13.635 "lvs/nvme0n1p0" 00:27:13.635 ], 00:27:13.635 "product_name": "Logical Volume", 00:27:13.635 "block_size": 4096, 00:27:13.635 "num_blocks": 26476544, 00:27:13.635 "uuid": "fb79dd4b-ef7a-4889-a09c-0b2cce89058e", 00:27:13.635 "assigned_rate_limits": { 00:27:13.635 "rw_ios_per_sec": 0, 00:27:13.635 "rw_mbytes_per_sec": 0, 00:27:13.635 "r_mbytes_per_sec": 0, 00:27:13.635 "w_mbytes_per_sec": 0 00:27:13.635 }, 00:27:13.635 "claimed": false, 00:27:13.635 "zoned": false, 00:27:13.635 "supported_io_types": { 00:27:13.635 "read": true, 00:27:13.635 "write": true, 00:27:13.635 "unmap": true, 00:27:13.635 "flush": false, 00:27:13.635 "reset": true, 00:27:13.635 "nvme_admin": false, 00:27:13.635 "nvme_io": false, 00:27:13.635 "nvme_io_md": false, 00:27:13.635 "write_zeroes": true, 00:27:13.635 "zcopy": false, 00:27:13.635 "get_zone_info": false, 00:27:13.635 "zone_management": false, 00:27:13.635 "zone_append": false, 00:27:13.635 "compare": false, 00:27:13.635 "compare_and_write": false, 00:27:13.635 "abort": false, 00:27:13.635 "seek_hole": true, 
00:27:13.635 "seek_data": true, 00:27:13.635 "copy": false, 00:27:13.635 "nvme_iov_md": false 00:27:13.635 }, 00:27:13.635 "driver_specific": { 00:27:13.635 "lvol": { 00:27:13.635 "lvol_store_uuid": "aaac27f8-9ed2-4615-af36-f764bb6f8829", 00:27:13.635 "base_bdev": "nvme0n1", 00:27:13.635 "thin_provision": true, 00:27:13.635 "num_allocated_clusters": 0, 00:27:13.635 "snapshot": false, 00:27:13.635 "clone": false, 00:27:13.635 "esnap_clone": false 00:27:13.635 } 00:27:13.635 } 00:27:13.635 } 00:27:13.635 ]' 00:27:13.635 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:13.635 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:27:13.635 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:13.635 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:13.635 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:13.635 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:27:13.895 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:14.155 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:14.155 { 00:27:14.155 "name": "fb79dd4b-ef7a-4889-a09c-0b2cce89058e", 00:27:14.155 "aliases": [ 00:27:14.155 "lvs/nvme0n1p0" 00:27:14.155 ], 00:27:14.155 "product_name": "Logical Volume", 00:27:14.155 "block_size": 4096, 00:27:14.155 "num_blocks": 26476544, 00:27:14.155 "uuid": "fb79dd4b-ef7a-4889-a09c-0b2cce89058e", 00:27:14.155 "assigned_rate_limits": { 00:27:14.155 "rw_ios_per_sec": 0, 00:27:14.155 "rw_mbytes_per_sec": 0, 00:27:14.155 "r_mbytes_per_sec": 0, 00:27:14.155 "w_mbytes_per_sec": 0 00:27:14.155 }, 00:27:14.155 "claimed": false, 00:27:14.155 "zoned": false, 00:27:14.155 "supported_io_types": { 00:27:14.155 "read": true, 00:27:14.155 "write": true, 00:27:14.155 "unmap": true, 00:27:14.155 "flush": false, 00:27:14.155 "reset": true, 00:27:14.155 "nvme_admin": false, 00:27:14.155 "nvme_io": false, 00:27:14.155 "nvme_io_md": false, 00:27:14.155 "write_zeroes": true, 00:27:14.155 "zcopy": false, 00:27:14.155 "get_zone_info": false, 00:27:14.155 "zone_management": false, 00:27:14.155 "zone_append": false, 00:27:14.155 "compare": false, 00:27:14.155 "compare_and_write": false, 00:27:14.155 "abort": false, 00:27:14.155 "seek_hole": true, 00:27:14.155 "seek_data": true, 00:27:14.155 
"copy": false, 00:27:14.155 "nvme_iov_md": false 00:27:14.155 }, 00:27:14.155 "driver_specific": { 00:27:14.155 "lvol": { 00:27:14.155 "lvol_store_uuid": "aaac27f8-9ed2-4615-af36-f764bb6f8829", 00:27:14.155 "base_bdev": "nvme0n1", 00:27:14.155 "thin_provision": true, 00:27:14.155 "num_allocated_clusters": 0, 00:27:14.155 "snapshot": false, 00:27:14.155 "clone": false, 00:27:14.155 "esnap_clone": false 00:27:14.155 } 00:27:14.155 } 00:27:14.155 } 00:27:14.155 ]' 00:27:14.155 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:14.155 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:27:14.155 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:14.155 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:14.155 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:14.155 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:27:14.155 09:41:14 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:27:14.155 09:41:14 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:14.415 09:41:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:27:14.415 09:41:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:14.415 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:14.415 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:14.415 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:27:14.415 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:27:14.415 09:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fb79dd4b-ef7a-4889-a09c-0b2cce89058e 00:27:14.675 09:41:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:14.675 { 00:27:14.675 "name": "fb79dd4b-ef7a-4889-a09c-0b2cce89058e", 00:27:14.675 "aliases": [ 00:27:14.675 "lvs/nvme0n1p0" 00:27:14.675 ], 00:27:14.675 "product_name": "Logical Volume", 00:27:14.675 "block_size": 4096, 00:27:14.675 "num_blocks": 26476544, 00:27:14.675 "uuid": "fb79dd4b-ef7a-4889-a09c-0b2cce89058e", 00:27:14.675 "assigned_rate_limits": { 00:27:14.675 "rw_ios_per_sec": 0, 00:27:14.675 "rw_mbytes_per_sec": 0, 00:27:14.675 "r_mbytes_per_sec": 0, 00:27:14.675 "w_mbytes_per_sec": 0 00:27:14.675 }, 00:27:14.675 "claimed": false, 00:27:14.675 "zoned": false, 00:27:14.675 "supported_io_types": { 00:27:14.675 "read": true, 00:27:14.675 "write": true, 00:27:14.675 "unmap": true, 00:27:14.675 "flush": false, 00:27:14.675 "reset": true, 00:27:14.675 "nvme_admin": false, 00:27:14.675 "nvme_io": false, 00:27:14.675 "nvme_io_md": false, 00:27:14.675 "write_zeroes": true, 00:27:14.675 "zcopy": false, 00:27:14.675 "get_zone_info": false, 00:27:14.675 "zone_management": false, 00:27:14.675 "zone_append": false, 00:27:14.675 "compare": false, 00:27:14.675 "compare_and_write": false, 00:27:14.675 "abort": false, 00:27:14.675 "seek_hole": true, 00:27:14.675 "seek_data": true, 00:27:14.675 "copy": false, 00:27:14.675 "nvme_iov_md": false 00:27:14.675 }, 00:27:14.675 "driver_specific": { 00:27:14.675 "lvol": { 00:27:14.675 "lvol_store_uuid": "aaac27f8-9ed2-4615-af36-f764bb6f8829", 00:27:14.675 "base_bdev": 
"nvme0n1", 00:27:14.675 "thin_provision": true, 00:27:14.675 "num_allocated_clusters": 0, 00:27:14.675 "snapshot": false, 00:27:14.675 "clone": false, 00:27:14.675 "esnap_clone": false 00:27:14.675 } 00:27:14.675 } 00:27:14.675 } 00:27:14.675 ]' 00:27:14.675 09:41:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:14.675 09:41:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:27:14.675 09:41:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:14.675 09:41:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:14.675 09:41:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:14.675 09:41:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:27:14.675 09:41:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:27:14.675 09:41:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fb79dd4b-ef7a-4889-a09c-0b2cce89058e -c nvc0n1p0 --l2p_dram_limit 20 00:27:14.936 [2024-07-25 09:41:15.420447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.936 [2024-07-25 09:41:15.420495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:14.936 [2024-07-25 09:41:15.420511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:14.936 [2024-07-25 09:41:15.420518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.936 [2024-07-25 09:41:15.420573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.937 [2024-07-25 09:41:15.420582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:14.937 [2024-07-25 09:41:15.420594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:14.937 [2024-07-25 09:41:15.420601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.937 [2024-07-25 09:41:15.420619] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:14.937 [2024-07-25 09:41:15.421808] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:14.937 [2024-07-25 09:41:15.421849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.937 [2024-07-25 09:41:15.421858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:14.937 [2024-07-25 09:41:15.421868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:27:14.937 [2024-07-25 09:41:15.421875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.937 [2024-07-25 09:41:15.421961] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d1a45bee-5e6e-4f49-b197-7e8545a54920 00:27:14.937 [2024-07-25 09:41:15.423378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.937 [2024-07-25 09:41:15.423414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:14.937 [2024-07-25 09:41:15.423442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:14.937 [2024-07-25 09:41:15.423452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.937 [2024-07-25 09:41:15.430966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.937 [2024-07-25 09:41:15.430999] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:14.937 [2024-07-25 09:41:15.431024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.487 ms 00:27:14.937 [2024-07-25 09:41:15.431034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.937 [2024-07-25 09:41:15.431124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.937 [2024-07-25 09:41:15.431142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:14.937 [2024-07-25 09:41:15.431150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:27:14.937 [2024-07-25 09:41:15.431161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.937 [2024-07-25 09:41:15.431223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.937 [2024-07-25 09:41:15.431234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:14.937 [2024-07-25 09:41:15.431242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:14.937 [2024-07-25 09:41:15.431269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.937 [2024-07-25 09:41:15.431291] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:14.937 [2024-07-25 09:41:15.436751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.937 [2024-07-25 09:41:15.436779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:14.937 [2024-07-25 09:41:15.436791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.476 ms 00:27:14.937 [2024-07-25 09:41:15.436799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.937 [2024-07-25 09:41:15.436832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.937 [2024-07-25 09:41:15.436840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:14.937 [2024-07-25 09:41:15.436850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:14.937 [2024-07-25 09:41:15.436857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.937 [2024-07-25 09:41:15.436903] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:14.937 [2024-07-25 09:41:15.437028] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:14.937 [2024-07-25 09:41:15.437059] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:14.937 [2024-07-25 09:41:15.437069] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:14.937 [2024-07-25 09:41:15.437080] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:14.937 [2024-07-25 09:41:15.437089] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:14.937 [2024-07-25 09:41:15.437098] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:14.937 [2024-07-25 09:41:15.437105] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:14.937 [2024-07-25 09:41:15.437115] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:14.937 [2024-07-25 09:41:15.437122] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache chunk count 5 00:27:14.937 [2024-07-25 09:41:15.437132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.937 [2024-07-25 09:41:15.437139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:14.937 [2024-07-25 09:41:15.437151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:27:14.937 [2024-07-25 09:41:15.437158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.937 [2024-07-25 09:41:15.437241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.937 [2024-07-25 09:41:15.437264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:14.937 [2024-07-25 09:41:15.437274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:14.937 [2024-07-25 09:41:15.437282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.937 [2024-07-25 09:41:15.437361] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:14.937 [2024-07-25 09:41:15.437371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:14.937 [2024-07-25 09:41:15.437381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.937 [2024-07-25 09:41:15.437391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.937 [2024-07-25 09:41:15.437400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:14.937 [2024-07-25 09:41:15.437408] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:14.937 [2024-07-25 09:41:15.437417] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:14.937 [2024-07-25 09:41:15.437423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:14.937 [2024-07-25 09:41:15.437431] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:14.937 [2024-07-25 09:41:15.437438] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.937 [2024-07-25 09:41:15.437446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:14.937 [2024-07-25 09:41:15.437455] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:14.937 [2024-07-25 09:41:15.437463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:14.937 [2024-07-25 09:41:15.437470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:14.937 [2024-07-25 09:41:15.437478] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:14.937 [2024-07-25 09:41:15.437485] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.937 [2024-07-25 09:41:15.437495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:14.937 [2024-07-25 09:41:15.437501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:14.937 [2024-07-25 09:41:15.437522] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.937 [2024-07-25 09:41:15.437529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:14.937 [2024-07-25 09:41:15.437538] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:14.937 [2024-07-25 09:41:15.437544] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.937 [2024-07-25 09:41:15.437551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:14.937 [2024-07-25 09:41:15.437557] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:14.937 [2024-07-25 09:41:15.437565] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.937 [2024-07-25 09:41:15.437572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:14.937 [2024-07-25 09:41:15.437579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:14.937 [2024-07-25 09:41:15.437586] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.937 [2024-07-25 09:41:15.437593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:14.937 [2024-07-25 09:41:15.437599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:14.937 [2024-07-25 09:41:15.437606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:14.937 [2024-07-25 09:41:15.437612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:14.937 [2024-07-25 09:41:15.437622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:14.937 [2024-07-25 09:41:15.437627] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.938 [2024-07-25 09:41:15.437635] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:14.938 [2024-07-25 09:41:15.437641] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:14.938 [2024-07-25 09:41:15.437648] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:14.938 [2024-07-25 09:41:15.437654] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:14.938 [2024-07-25 09:41:15.437663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:14.938 [2024-07-25 09:41:15.437670] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.938 [2024-07-25 09:41:15.437678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:14.938 [2024-07-25 09:41:15.437684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:14.938 [2024-07-25 09:41:15.437691] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.938 [2024-07-25 09:41:15.437699] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:14.938 [2024-07-25 09:41:15.437707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:14.938 [2024-07-25 09:41:15.437714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:14.938 [2024-07-25 09:41:15.437722] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:14.938 [2024-07-25 09:41:15.437729] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:14.938 [2024-07-25 09:41:15.437738] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:14.938 [2024-07-25 09:41:15.437745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:14.938 [2024-07-25 09:41:15.437753] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:14.938 [2024-07-25 09:41:15.437758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:14.938 [2024-07-25 09:41:15.437766] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:14.938 [2024-07-25 09:41:15.437776] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:14.938 [2024-07-25 09:41:15.437787] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.938 [2024-07-25 09:41:15.437796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:14.938 [2024-07-25 09:41:15.437805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:14.938 [2024-07-25 09:41:15.437812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:14.938 [2024-07-25 09:41:15.437821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:14.938 [2024-07-25 09:41:15.437827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:14.938 [2024-07-25 09:41:15.437835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:14.938 [2024-07-25 09:41:15.437842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:14.938 [2024-07-25 09:41:15.437850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:14.938 [2024-07-25 09:41:15.437857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:14.938 [2024-07-25 09:41:15.437868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:14.938 [2024-07-25 09:41:15.437875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:14.938 [2024-07-25 09:41:15.437883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:14.938 [2024-07-25 09:41:15.437890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:14.938 [2024-07-25 09:41:15.437899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:14.938 [2024-07-25 09:41:15.437905] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:14.938 [2024-07-25 09:41:15.437914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:14.938 [2024-07-25 09:41:15.437921] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:14.938 [2024-07-25 09:41:15.437929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:14.938 [2024-07-25 09:41:15.437936] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:14.938 [2024-07-25 09:41:15.437944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:14.938 [2024-07-25 09:41:15.437953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:14.938 [2024-07-25 09:41:15.437965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:14.938 [2024-07-25 09:41:15.437973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:27:14.938 [2024-07-25 09:41:15.437981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:14.938 [2024-07-25 09:41:15.438014] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:14.938 [2024-07-25 09:41:15.438026] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:19.135 [2024-07-25 09:41:19.221764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.135 [2024-07-25 09:41:19.221827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:19.135 [2024-07-25 09:41:19.221843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3791.036 ms 00:27:19.135 [2024-07-25 09:41:19.221852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.135 [2024-07-25 09:41:19.267821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.135 [2024-07-25 09:41:19.267874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:19.135 [2024-07-25 09:41:19.267887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.754 ms 00:27:19.135 [2024-07-25 09:41:19.267896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.135 [2024-07-25 09:41:19.268021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.135 [2024-07-25 09:41:19.268033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:19.135 [2024-07-25 09:41:19.268043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:19.135 [2024-07-25 09:41:19.268053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.135 [2024-07-25 09:41:19.313504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.135 [2024-07-25 09:41:19.313548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:19.135 [2024-07-25 09:41:19.313558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.493 ms 00:27:19.135 [2024-07-25 09:41:19.313567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.135 [2024-07-25 09:41:19.313598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.135 [2024-07-25 09:41:19.313607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.135 [2024-07-25 09:41:19.313615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:19.135 [2024-07-25 09:41:19.313625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.314073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.314095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:19.136 [2024-07-25 09:41:19.314103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:27:19.136 [2024-07-25 09:41:19.314113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.314200] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.314216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:19.136 [2024-07-25 09:41:19.314227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:27:19.136 [2024-07-25 09:41:19.314248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.333613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.333648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:19.136 [2024-07-25 09:41:19.333657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.385 ms 00:27:19.136 [2024-07-25 09:41:19.333666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.345425] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:27:19.136 [2024-07-25 09:41:19.351026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.351055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:19.136 [2024-07-25 09:41:19.351067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.323 ms 00:27:19.136 [2024-07-25 09:41:19.351074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.449619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.449697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:19.136 [2024-07-25 09:41:19.449712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.701 ms 00:27:19.136 [2024-07-25 09:41:19.449720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.449889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.449901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:19.136 [2024-07-25 09:41:19.449914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:27:19.136 [2024-07-25 09:41:19.449921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.486202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.486240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:19.136 [2024-07-25 09:41:19.486252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.305 ms 00:27:19.136 [2024-07-25 09:41:19.486260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.520444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.520475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:19.136 [2024-07-25 09:41:19.520488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.213 ms 00:27:19.136 [2024-07-25 09:41:19.520495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.521242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.521266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:19.136 [2024-07-25 09:41:19.521276] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:27:19.136 [2024-07-25 09:41:19.521284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.625840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.625884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:19.136 [2024-07-25 09:41:19.625903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.713 ms 00:27:19.136 [2024-07-25 09:41:19.625910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.661581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.661617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:19.136 [2024-07-25 09:41:19.661630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.698 ms 00:27:19.136 [2024-07-25 09:41:19.661639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.697512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.697547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:19.136 [2024-07-25 09:41:19.697559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.906 ms 00:27:19.136 [2024-07-25 09:41:19.697565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.733138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.733170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:19.136 [2024-07-25 09:41:19.733181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.606 ms 00:27:19.136 [2024-07-25 09:41:19.733189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.733226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.733249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:19.136 [2024-07-25 09:41:19.733262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:19.136 [2024-07-25 09:41:19.733269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.733351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.136 [2024-07-25 09:41:19.733362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:19.136 [2024-07-25 09:41:19.733372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:27:19.136 [2024-07-25 09:41:19.733381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.136 [2024-07-25 09:41:19.734437] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4321.835 ms, result 0 00:27:19.136 { 00:27:19.136 "name": "ftl0", 00:27:19.136 "uuid": "d1a45bee-5e6e-4f49-b197-7e8545a54920" 00:27:19.136 } 00:27:19.396 09:41:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:27:19.396 09:41:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name 00:27:19.396 09:41:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:27:19.396 09:41:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:27:19.396 [2024-07-25 09:41:19.982217] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:19.396 I/O size of 69632 is greater than zero copy threshold (65536). 00:27:19.396 Zero copy mechanism will not be used. 00:27:19.396 Running I/O for 4 seconds... 00:27:23.591 00:27:23.591 Latency(us) 00:27:23.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:23.591 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:27:23.591 ftl0 : 4.00 1529.17 101.55 0.00 0.00 685.83 248.62 7669.72 00:27:23.591 =================================================================================================================== 00:27:23.591 Total : 1529.17 101.55 0.00 0.00 685.83 248.62 7669.72 00:27:23.591 [2024-07-25 09:41:23.984058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:23.591 0 00:27:23.591 09:41:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:27:23.591 [2024-07-25 09:41:24.092571] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:23.591 Running I/O for 4 seconds... 00:27:27.782 00:27:27.782 Latency(us) 00:27:27.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.782 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:27:27.782 ftl0 : 4.01 10620.90 41.49 0.00 0.00 12027.89 253.99 24611.77 00:27:27.782 =================================================================================================================== 00:27:27.782 Total : 10620.90 41.49 0.00 0.00 12027.89 0.00 24611.77 00:27:27.782 [2024-07-25 09:41:28.107046] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:27.782 0 00:27:27.782 09:41:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:27:27.782 [2024-07-25 09:41:28.230875] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:27:27.782 Running I/O for 4 seconds... 
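For reference, this bdevperf stage boils down to three fixed 4-second workloads driven over RPC against ftl0: queue depth 1 with 69632-byte random writes (above the 65536-byte zero-copy threshold, so zero copy is disabled), queue depth 128 with 4096-byte random writes, and queue depth 128 with a 4096-byte verify pass. A minimal sketch of the equivalent manual run, assuming the SPDK repo layout used by this job and omitting the JSON config / RPC socket arguments the bdevperf invocation also needs:

  # bdevperf was started earlier in this job in wait-for-RPC mode against the FTL bdev:
  #   ./build/examples/bdevperf -z -T ftl0
  # each perform_tests call then triggers one 4-second run over RPC:
  ./examples/bdev/bdevperf/bdevperf.py perform_tests -q 1   -w randwrite -t 4 -o 69632
  ./examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
  ./examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify    -t 4 -o 4096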
00:27:31.969 00:27:31.969 Latency(us) 00:27:31.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.969 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:31.969 Verification LBA range: start 0x0 length 0x1400000 00:27:31.969 ftl0 : 4.01 8291.48 32.39 0.00 0.00 15390.12 277.24 37318.32 00:27:31.969 =================================================================================================================== 00:27:31.969 Total : 8291.48 32.39 0.00 0.00 15390.12 0.00 37318.32 00:27:31.969 [2024-07-25 09:41:32.249777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:27:31.969 0 00:27:31.969 09:41:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:27:31.969 [2024-07-25 09:41:32.432159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.969 [2024-07-25 09:41:32.432224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:31.969 [2024-07-25 09:41:32.432237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:31.969 [2024-07-25 09:41:32.432258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.969 [2024-07-25 09:41:32.432281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:31.969 [2024-07-25 09:41:32.435942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.969 [2024-07-25 09:41:32.435974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:31.969 [2024-07-25 09:41:32.435983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.654 ms 00:27:31.969 [2024-07-25 09:41:32.435992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:31.969 [2024-07-25 09:41:32.437948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:31.969 [2024-07-25 09:41:32.437986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:31.969 [2024-07-25 09:41:32.437996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.941 ms 00:27:31.969 [2024-07-25 09:41:32.438006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.229 [2024-07-25 09:41:32.650212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.230 [2024-07-25 09:41:32.650305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:32.230 [2024-07-25 09:41:32.650320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 212.597 ms 00:27:32.230 [2024-07-25 09:41:32.650335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.230 [2024-07-25 09:41:32.655502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.230 [2024-07-25 09:41:32.655532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:32.230 [2024-07-25 09:41:32.655541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.121 ms 00:27:32.230 [2024-07-25 09:41:32.655566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.230 [2024-07-25 09:41:32.690523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.230 [2024-07-25 09:41:32.690560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:32.230 [2024-07-25 09:41:32.690569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 34.968 ms 00:27:32.230 [2024-07-25 09:41:32.690577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.230 [2024-07-25 09:41:32.711617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.230 [2024-07-25 09:41:32.711654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:32.230 [2024-07-25 09:41:32.711667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.048 ms 00:27:32.230 [2024-07-25 09:41:32.711675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.230 [2024-07-25 09:41:32.711822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.230 [2024-07-25 09:41:32.711835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:32.230 [2024-07-25 09:41:32.711843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:32.230 [2024-07-25 09:41:32.711861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.230 [2024-07-25 09:41:32.747520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.230 [2024-07-25 09:41:32.747553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:32.230 [2024-07-25 09:41:32.747562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.714 ms 00:27:32.230 [2024-07-25 09:41:32.747570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.230 [2024-07-25 09:41:32.781556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.230 [2024-07-25 09:41:32.781589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:32.230 [2024-07-25 09:41:32.781598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.004 ms 00:27:32.230 [2024-07-25 09:41:32.781606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.230 [2024-07-25 09:41:32.815804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.230 [2024-07-25 09:41:32.815838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:32.230 [2024-07-25 09:41:32.815864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.216 ms 00:27:32.230 [2024-07-25 09:41:32.815873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.492 [2024-07-25 09:41:32.851036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.492 [2024-07-25 09:41:32.851070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:32.492 [2024-07-25 09:41:32.851079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.155 ms 00:27:32.492 [2024-07-25 09:41:32.851089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.492 [2024-07-25 09:41:32.851135] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:32.492 [2024-07-25 09:41:32.851150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:27:32.492 [2024-07-25 09:41:32.851184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:32.492 [2024-07-25 09:41:32.851709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851847] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.851999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.852009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.852016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:32.493 [2024-07-25 09:41:32.852032] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:32.493 [2024-07-25 09:41:32.852039] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1a45bee-5e6e-4f49-b197-7e8545a54920 00:27:32.493 [2024-07-25 09:41:32.852048] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:32.493 [2024-07-25 09:41:32.852054] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:27:32.493 [2024-07-25 09:41:32.852062] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:32.493 [2024-07-25 09:41:32.852071] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:32.493 [2024-07-25 09:41:32.852079] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:32.493 [2024-07-25 09:41:32.852087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:32.493 [2024-07-25 09:41:32.852095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:32.493 [2024-07-25 09:41:32.852101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:32.493 [2024-07-25 09:41:32.852110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:32.493 [2024-07-25 09:41:32.852117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.493 [2024-07-25 09:41:32.852126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:32.493 [2024-07-25 09:41:32.852134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:27:32.493 [2024-07-25 09:41:32.852142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.493 [2024-07-25 09:41:32.871084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.493 [2024-07-25 09:41:32.871117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:32.493 [2024-07-25 09:41:32.871126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.939 ms 00:27:32.493 [2024-07-25 09:41:32.871134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.493 [2024-07-25 09:41:32.871650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.493 [2024-07-25 09:41:32.871666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:32.493 [2024-07-25 09:41:32.871674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:27:32.493 [2024-07-25 09:41:32.871682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.493 [2024-07-25 09:41:32.916181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.493 [2024-07-25 09:41:32.916213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:32.493 [2024-07-25 09:41:32.916237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.493 [2024-07-25 09:41:32.916253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.493 [2024-07-25 09:41:32.916301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.493 [2024-07-25 09:41:32.916311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:32.493 [2024-07-25 09:41:32.916318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.493 [2024-07-25 09:41:32.916326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.493 [2024-07-25 09:41:32.916401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.493 [2024-07-25 09:41:32.916417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:32.493 [2024-07-25 09:41:32.916424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.493 [2024-07-25 09:41:32.916433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.493 [2024-07-25 09:41:32.916447] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.493 [2024-07-25 09:41:32.916457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:32.493 [2024-07-25 09:41:32.916464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.493 [2024-07-25 09:41:32.916473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.493 [2024-07-25 09:41:33.026386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.493 [2024-07-25 09:41:33.026467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:32.493 [2024-07-25 09:41:33.026479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.493 [2024-07-25 09:41:33.026490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.754 [2024-07-25 09:41:33.119214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.754 [2024-07-25 09:41:33.119297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:32.754 [2024-07-25 09:41:33.119309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.754 [2024-07-25 09:41:33.119318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.754 [2024-07-25 09:41:33.119418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.754 [2024-07-25 09:41:33.119430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.754 [2024-07-25 09:41:33.119440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.754 [2024-07-25 09:41:33.119448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.754 [2024-07-25 09:41:33.119485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.754 [2024-07-25 09:41:33.119496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:32.754 [2024-07-25 09:41:33.119503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.754 [2024-07-25 09:41:33.119511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.754 [2024-07-25 09:41:33.119601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.754 [2024-07-25 09:41:33.119642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.754 [2024-07-25 09:41:33.119649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.754 [2024-07-25 09:41:33.119663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.754 [2024-07-25 09:41:33.119694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.754 [2024-07-25 09:41:33.119733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:32.754 [2024-07-25 09:41:33.119741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.754 [2024-07-25 09:41:33.119750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.754 [2024-07-25 09:41:33.119786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.754 [2024-07-25 09:41:33.119796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.754 [2024-07-25 09:41:33.119804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.754 [2024-07-25 09:41:33.119812] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:32.754 [2024-07-25 09:41:33.119855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.754 [2024-07-25 09:41:33.119865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.754 [2024-07-25 09:41:33.119872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.754 [2024-07-25 09:41:33.119881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.754 [2024-07-25 09:41:33.119999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 689.129 ms, result 0 00:27:32.754 true 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 79657 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 79657 ']' 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 79657 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79657 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:32.754 killing process with pid 79657 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79657' 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 79657 00:27:32.754 Received shutdown signal, test time was about 4.000000 seconds 00:27:32.754 00:27:32.754 Latency(us) 00:27:32.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.754 =================================================================================================================== 00:27:32.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:32.754 09:41:33 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 79657 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:39.320 Remove shared memory files 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:27:39.320 00:27:39.320 real 0m27.845s 00:27:39.320 user 0m30.097s 00:27:39.320 sys 0m1.212s 00:27:39.320 09:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:39.321 09:41:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:39.321 ************************************ 00:27:39.321 END TEST ftl_bdevperf 00:27:39.321 
************************************ 00:27:39.321 09:41:39 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:39.321 09:41:39 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:27:39.321 09:41:39 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:39.321 09:41:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:39.321 ************************************ 00:27:39.321 START TEST ftl_trim 00:27:39.321 ************************************ 00:27:39.321 09:41:39 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:27:39.321 * Looking for test storage... 00:27:39.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=80053 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 80053 00:27:39.321 09:41:39 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:27:39.321 09:41:39 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 80053 ']' 00:27:39.321 09:41:39 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.321 09:41:39 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:39.321 09:41:39 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.321 09:41:39 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:39.321 09:41:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:27:39.321 [2024-07-25 09:41:39.752564] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:39.321 [2024-07-25 09:41:39.752678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80053 ] 00:27:39.321 [2024-07-25 09:41:39.915335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:39.580 [2024-07-25 09:41:40.132207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.580 [2024-07-25 09:41:40.132367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.580 [2024-07-25 09:41:40.132406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.518 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:40.518 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:27:40.518 09:41:41 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:40.518 09:41:41 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:27:40.518 09:41:41 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:40.518 09:41:41 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:27:40.518 09:41:41 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:27:40.518 09:41:41 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:40.776 09:41:41 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:40.776 09:41:41 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:27:40.776 09:41:41 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:40.777 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:27:40.777 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:40.777 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:27:40.777 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:27:40.777 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:41.035 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:41.035 { 00:27:41.035 "name": "nvme0n1", 00:27:41.035 "aliases": [ 00:27:41.035 "7ad634a2-673b-460e-bb24-f739d91c1c15" 00:27:41.035 ], 00:27:41.035 "product_name": "NVMe disk", 00:27:41.035 "block_size": 4096, 00:27:41.035 "num_blocks": 1310720, 00:27:41.035 "uuid": "7ad634a2-673b-460e-bb24-f739d91c1c15", 00:27:41.035 "assigned_rate_limits": { 00:27:41.035 "rw_ios_per_sec": 0, 00:27:41.035 "rw_mbytes_per_sec": 0, 00:27:41.035 "r_mbytes_per_sec": 0, 00:27:41.035 "w_mbytes_per_sec": 0 00:27:41.035 }, 00:27:41.035 "claimed": true, 00:27:41.035 "claim_type": "read_many_write_one", 00:27:41.035 "zoned": false, 00:27:41.035 "supported_io_types": { 00:27:41.035 "read": true, 00:27:41.035 "write": true, 00:27:41.035 "unmap": true, 00:27:41.035 "flush": true, 00:27:41.035 "reset": true, 00:27:41.035 "nvme_admin": true, 00:27:41.036 "nvme_io": true, 00:27:41.036 "nvme_io_md": false, 00:27:41.036 "write_zeroes": true, 00:27:41.036 "zcopy": false, 00:27:41.036 "get_zone_info": false, 00:27:41.036 "zone_management": false, 00:27:41.036 "zone_append": false, 00:27:41.036 "compare": true, 00:27:41.036 "compare_and_write": false, 00:27:41.036 "abort": true, 00:27:41.036 "seek_hole": false, 00:27:41.036 "seek_data": false, 00:27:41.036 
"copy": true, 00:27:41.036 "nvme_iov_md": false 00:27:41.036 }, 00:27:41.036 "driver_specific": { 00:27:41.036 "nvme": [ 00:27:41.036 { 00:27:41.036 "pci_address": "0000:00:11.0", 00:27:41.036 "trid": { 00:27:41.036 "trtype": "PCIe", 00:27:41.036 "traddr": "0000:00:11.0" 00:27:41.036 }, 00:27:41.036 "ctrlr_data": { 00:27:41.036 "cntlid": 0, 00:27:41.036 "vendor_id": "0x1b36", 00:27:41.036 "model_number": "QEMU NVMe Ctrl", 00:27:41.036 "serial_number": "12341", 00:27:41.036 "firmware_revision": "8.0.0", 00:27:41.036 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:41.036 "oacs": { 00:27:41.036 "security": 0, 00:27:41.036 "format": 1, 00:27:41.036 "firmware": 0, 00:27:41.036 "ns_manage": 1 00:27:41.036 }, 00:27:41.036 "multi_ctrlr": false, 00:27:41.036 "ana_reporting": false 00:27:41.036 }, 00:27:41.036 "vs": { 00:27:41.036 "nvme_version": "1.4" 00:27:41.036 }, 00:27:41.036 "ns_data": { 00:27:41.036 "id": 1, 00:27:41.036 "can_share": false 00:27:41.036 } 00:27:41.036 } 00:27:41.036 ], 00:27:41.036 "mp_policy": "active_passive" 00:27:41.036 } 00:27:41.036 } 00:27:41.036 ]' 00:27:41.036 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:41.036 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:27:41.036 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:41.036 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:27:41.036 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:27:41.036 09:41:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:27:41.036 09:41:41 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:27:41.036 09:41:41 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:41.036 09:41:41 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:27:41.036 09:41:41 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:41.036 09:41:41 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:41.295 09:41:41 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=aaac27f8-9ed2-4615-af36-f764bb6f8829 00:27:41.295 09:41:41 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:27:41.295 09:41:41 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aaac27f8-9ed2-4615-af36-f764bb6f8829 00:27:41.295 09:41:41 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:41.555 09:41:42 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=454451de-6c76-40af-9c09-a25c17777091 00:27:41.555 09:41:42 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 454451de-6c76-40af-9c09-a25c17777091 00:27:41.814 09:41:42 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:41.815 09:41:42 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:41.815 09:41:42 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:27:41.815 09:41:42 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:41.815 09:41:42 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:41.815 09:41:42 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:27:41.815 09:41:42 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:41.815 09:41:42 
ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:41.815 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:41.815 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:27:41.815 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:27:41.815 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:42.074 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:42.074 { 00:27:42.074 "name": "c22ca81c-40e5-42c9-a3b7-2754cea0fde2", 00:27:42.074 "aliases": [ 00:27:42.074 "lvs/nvme0n1p0" 00:27:42.074 ], 00:27:42.074 "product_name": "Logical Volume", 00:27:42.074 "block_size": 4096, 00:27:42.074 "num_blocks": 26476544, 00:27:42.074 "uuid": "c22ca81c-40e5-42c9-a3b7-2754cea0fde2", 00:27:42.074 "assigned_rate_limits": { 00:27:42.074 "rw_ios_per_sec": 0, 00:27:42.074 "rw_mbytes_per_sec": 0, 00:27:42.074 "r_mbytes_per_sec": 0, 00:27:42.074 "w_mbytes_per_sec": 0 00:27:42.074 }, 00:27:42.074 "claimed": false, 00:27:42.074 "zoned": false, 00:27:42.074 "supported_io_types": { 00:27:42.074 "read": true, 00:27:42.074 "write": true, 00:27:42.074 "unmap": true, 00:27:42.074 "flush": false, 00:27:42.074 "reset": true, 00:27:42.074 "nvme_admin": false, 00:27:42.074 "nvme_io": false, 00:27:42.074 "nvme_io_md": false, 00:27:42.074 "write_zeroes": true, 00:27:42.074 "zcopy": false, 00:27:42.074 "get_zone_info": false, 00:27:42.074 "zone_management": false, 00:27:42.074 "zone_append": false, 00:27:42.074 "compare": false, 00:27:42.074 "compare_and_write": false, 00:27:42.074 "abort": false, 00:27:42.074 "seek_hole": true, 00:27:42.074 "seek_data": true, 00:27:42.074 "copy": false, 00:27:42.074 "nvme_iov_md": false 00:27:42.074 }, 00:27:42.074 "driver_specific": { 00:27:42.074 "lvol": { 00:27:42.074 "lvol_store_uuid": "454451de-6c76-40af-9c09-a25c17777091", 00:27:42.074 "base_bdev": "nvme0n1", 00:27:42.074 "thin_provision": true, 00:27:42.074 "num_allocated_clusters": 0, 00:27:42.074 "snapshot": false, 00:27:42.074 "clone": false, 00:27:42.074 "esnap_clone": false 00:27:42.074 } 00:27:42.074 } 00:27:42.074 } 00:27:42.074 ]' 00:27:42.074 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:42.074 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:27:42.074 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:42.074 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:42.074 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:42.074 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:27:42.074 09:41:42 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:27:42.074 09:41:42 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:27:42.074 09:41:42 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:42.333 09:41:42 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:42.333 09:41:42 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:42.333 09:41:42 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:42.334 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:42.334 
09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:42.334 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:27:42.334 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:27:42.334 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:42.334 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:42.334 { 00:27:42.334 "name": "c22ca81c-40e5-42c9-a3b7-2754cea0fde2", 00:27:42.334 "aliases": [ 00:27:42.334 "lvs/nvme0n1p0" 00:27:42.334 ], 00:27:42.334 "product_name": "Logical Volume", 00:27:42.334 "block_size": 4096, 00:27:42.334 "num_blocks": 26476544, 00:27:42.334 "uuid": "c22ca81c-40e5-42c9-a3b7-2754cea0fde2", 00:27:42.334 "assigned_rate_limits": { 00:27:42.334 "rw_ios_per_sec": 0, 00:27:42.334 "rw_mbytes_per_sec": 0, 00:27:42.334 "r_mbytes_per_sec": 0, 00:27:42.334 "w_mbytes_per_sec": 0 00:27:42.334 }, 00:27:42.334 "claimed": false, 00:27:42.334 "zoned": false, 00:27:42.334 "supported_io_types": { 00:27:42.334 "read": true, 00:27:42.334 "write": true, 00:27:42.334 "unmap": true, 00:27:42.334 "flush": false, 00:27:42.334 "reset": true, 00:27:42.334 "nvme_admin": false, 00:27:42.334 "nvme_io": false, 00:27:42.334 "nvme_io_md": false, 00:27:42.334 "write_zeroes": true, 00:27:42.334 "zcopy": false, 00:27:42.334 "get_zone_info": false, 00:27:42.334 "zone_management": false, 00:27:42.334 "zone_append": false, 00:27:42.334 "compare": false, 00:27:42.334 "compare_and_write": false, 00:27:42.334 "abort": false, 00:27:42.334 "seek_hole": true, 00:27:42.334 "seek_data": true, 00:27:42.334 "copy": false, 00:27:42.334 "nvme_iov_md": false 00:27:42.334 }, 00:27:42.334 "driver_specific": { 00:27:42.334 "lvol": { 00:27:42.334 "lvol_store_uuid": "454451de-6c76-40af-9c09-a25c17777091", 00:27:42.334 "base_bdev": "nvme0n1", 00:27:42.334 "thin_provision": true, 00:27:42.334 "num_allocated_clusters": 0, 00:27:42.334 "snapshot": false, 00:27:42.334 "clone": false, 00:27:42.334 "esnap_clone": false 00:27:42.334 } 00:27:42.334 } 00:27:42.334 } 00:27:42.334 ]' 00:27:42.334 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:42.334 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:27:42.334 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:42.593 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:42.593 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:42.593 09:41:42 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:27:42.593 09:41:42 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:27:42.593 09:41:42 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:42.593 09:41:43 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:27:42.593 09:41:43 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:27:42.593 09:41:43 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:42.593 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:42.593 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:27:42.593 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:27:42.593 09:41:43 ftl.ftl_trim -- 
common/autotest_common.sh@1381 -- # local nb 00:27:42.593 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c22ca81c-40e5-42c9-a3b7-2754cea0fde2 00:27:42.853 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:27:42.853 { 00:27:42.853 "name": "c22ca81c-40e5-42c9-a3b7-2754cea0fde2", 00:27:42.853 "aliases": [ 00:27:42.853 "lvs/nvme0n1p0" 00:27:42.853 ], 00:27:42.853 "product_name": "Logical Volume", 00:27:42.853 "block_size": 4096, 00:27:42.853 "num_blocks": 26476544, 00:27:42.853 "uuid": "c22ca81c-40e5-42c9-a3b7-2754cea0fde2", 00:27:42.853 "assigned_rate_limits": { 00:27:42.853 "rw_ios_per_sec": 0, 00:27:42.853 "rw_mbytes_per_sec": 0, 00:27:42.853 "r_mbytes_per_sec": 0, 00:27:42.853 "w_mbytes_per_sec": 0 00:27:42.853 }, 00:27:42.853 "claimed": false, 00:27:42.853 "zoned": false, 00:27:42.853 "supported_io_types": { 00:27:42.853 "read": true, 00:27:42.853 "write": true, 00:27:42.853 "unmap": true, 00:27:42.853 "flush": false, 00:27:42.853 "reset": true, 00:27:42.853 "nvme_admin": false, 00:27:42.853 "nvme_io": false, 00:27:42.853 "nvme_io_md": false, 00:27:42.853 "write_zeroes": true, 00:27:42.853 "zcopy": false, 00:27:42.853 "get_zone_info": false, 00:27:42.853 "zone_management": false, 00:27:42.853 "zone_append": false, 00:27:42.853 "compare": false, 00:27:42.853 "compare_and_write": false, 00:27:42.853 "abort": false, 00:27:42.853 "seek_hole": true, 00:27:42.853 "seek_data": true, 00:27:42.853 "copy": false, 00:27:42.853 "nvme_iov_md": false 00:27:42.853 }, 00:27:42.853 "driver_specific": { 00:27:42.853 "lvol": { 00:27:42.853 "lvol_store_uuid": "454451de-6c76-40af-9c09-a25c17777091", 00:27:42.853 "base_bdev": "nvme0n1", 00:27:42.853 "thin_provision": true, 00:27:42.853 "num_allocated_clusters": 0, 00:27:42.853 "snapshot": false, 00:27:42.853 "clone": false, 00:27:42.853 "esnap_clone": false 00:27:42.853 } 00:27:42.853 } 00:27:42.853 } 00:27:42.853 ]' 00:27:42.853 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:27:42.853 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:27:42.853 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:27:42.853 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:27:42.853 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:27:42.853 09:41:43 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:27:42.853 09:41:43 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:27:42.853 09:41:43 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c22ca81c-40e5-42c9-a3b7-2754cea0fde2 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:27:43.112 [2024-07-25 09:41:43.602836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.112 [2024-07-25 09:41:43.602886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:43.112 [2024-07-25 09:41:43.602897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:43.112 [2024-07-25 09:41:43.602923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.112 [2024-07-25 09:41:43.605821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.112 [2024-07-25 09:41:43.605861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:43.112 [2024-07-25 09:41:43.605870] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:27:43.112 [2024-07-25 09:41:43.605879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.112 [2024-07-25 09:41:43.606012] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:43.112 [2024-07-25 09:41:43.607027] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:43.112 [2024-07-25 09:41:43.607058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.112 [2024-07-25 09:41:43.607070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:43.112 [2024-07-25 09:41:43.607078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:27:43.112 [2024-07-25 09:41:43.607090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.112 [2024-07-25 09:41:43.607205] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1c8aeb57-7dc0-42c8-bc02-351163259a4d 00:27:43.112 [2024-07-25 09:41:43.608621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.112 [2024-07-25 09:41:43.608654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:43.112 [2024-07-25 09:41:43.608666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:43.112 [2024-07-25 09:41:43.608674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.112 [2024-07-25 09:41:43.616057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.112 [2024-07-25 09:41:43.616089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:43.112 [2024-07-25 09:41:43.616100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.239 ms 00:27:43.112 [2024-07-25 09:41:43.616107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.112 [2024-07-25 09:41:43.616296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.112 [2024-07-25 09:41:43.616311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:43.112 [2024-07-25 09:41:43.616323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:27:43.112 [2024-07-25 09:41:43.616330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.112 [2024-07-25 09:41:43.616409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.113 [2024-07-25 09:41:43.616419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:43.113 [2024-07-25 09:41:43.616430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:43.113 [2024-07-25 09:41:43.616437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.113 [2024-07-25 09:41:43.616502] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:43.113 [2024-07-25 09:41:43.622045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.113 [2024-07-25 09:41:43.622077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:43.113 [2024-07-25 09:41:43.622086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.562 ms 00:27:43.113 [2024-07-25 09:41:43.622111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.113 [2024-07-25 
09:41:43.622207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.113 [2024-07-25 09:41:43.622219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:43.113 [2024-07-25 09:41:43.622228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:43.113 [2024-07-25 09:41:43.622236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.113 [2024-07-25 09:41:43.622301] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:43.113 [2024-07-25 09:41:43.622422] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:43.113 [2024-07-25 09:41:43.622435] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:43.113 [2024-07-25 09:41:43.622450] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:43.113 [2024-07-25 09:41:43.622461] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:43.113 [2024-07-25 09:41:43.622471] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:43.113 [2024-07-25 09:41:43.622481] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:43.113 [2024-07-25 09:41:43.622491] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:43.113 [2024-07-25 09:41:43.622499] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:43.113 [2024-07-25 09:41:43.622526] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:43.113 [2024-07-25 09:41:43.622535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.113 [2024-07-25 09:41:43.622544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:43.113 [2024-07-25 09:41:43.622552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:27:43.113 [2024-07-25 09:41:43.622560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.113 [2024-07-25 09:41:43.622668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.113 [2024-07-25 09:41:43.622680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:43.113 [2024-07-25 09:41:43.622687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:43.113 [2024-07-25 09:41:43.622698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.113 [2024-07-25 09:41:43.622871] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:43.113 [2024-07-25 09:41:43.622890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:43.113 [2024-07-25 09:41:43.622898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:43.113 [2024-07-25 09:41:43.622907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.113 [2024-07-25 09:41:43.622914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:43.113 [2024-07-25 09:41:43.622922] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:43.113 [2024-07-25 09:41:43.622929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:43.113 [2024-07-25 09:41:43.622937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:27:43.113 [2024-07-25 09:41:43.622943] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:43.113 [2024-07-25 09:41:43.622952] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:43.113 [2024-07-25 09:41:43.622958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:43.113 [2024-07-25 09:41:43.622966] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:43.113 [2024-07-25 09:41:43.622972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:43.113 [2024-07-25 09:41:43.622982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:43.113 [2024-07-25 09:41:43.622990] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:43.113 [2024-07-25 09:41:43.622998] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.113 [2024-07-25 09:41:43.623005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:43.113 [2024-07-25 09:41:43.623014] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:43.113 [2024-07-25 09:41:43.623021] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.113 [2024-07-25 09:41:43.623030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:43.113 [2024-07-25 09:41:43.623036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:43.113 [2024-07-25 09:41:43.623044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:43.113 [2024-07-25 09:41:43.623050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:43.113 [2024-07-25 09:41:43.623058] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:43.113 [2024-07-25 09:41:43.623063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:43.113 [2024-07-25 09:41:43.623072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:43.113 [2024-07-25 09:41:43.623078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:43.113 [2024-07-25 09:41:43.623086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:43.113 [2024-07-25 09:41:43.623091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:43.113 [2024-07-25 09:41:43.623099] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:43.113 [2024-07-25 09:41:43.623105] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:43.113 [2024-07-25 09:41:43.623113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:43.113 [2024-07-25 09:41:43.623119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:43.113 [2024-07-25 09:41:43.623128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:43.113 [2024-07-25 09:41:43.623133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:43.113 [2024-07-25 09:41:43.623141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:43.113 [2024-07-25 09:41:43.623147] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:43.113 [2024-07-25 09:41:43.623154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:43.113 [2024-07-25 09:41:43.623160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:43.113 [2024-07-25 09:41:43.623169] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.113 [2024-07-25 09:41:43.623174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:43.113 [2024-07-25 09:41:43.623182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:43.113 [2024-07-25 09:41:43.623187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.113 [2024-07-25 09:41:43.623194] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:43.113 [2024-07-25 09:41:43.623201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:43.113 [2024-07-25 09:41:43.623211] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:43.113 [2024-07-25 09:41:43.623219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:43.113 [2024-07-25 09:41:43.623238] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:43.113 [2024-07-25 09:41:43.623246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:43.113 [2024-07-25 09:41:43.623256] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:43.113 [2024-07-25 09:41:43.623263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:43.113 [2024-07-25 09:41:43.623271] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:43.113 [2024-07-25 09:41:43.623277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:43.113 [2024-07-25 09:41:43.623289] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:43.113 [2024-07-25 09:41:43.623299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:43.113 [2024-07-25 09:41:43.623308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:43.113 [2024-07-25 09:41:43.623315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:43.113 [2024-07-25 09:41:43.623324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:43.113 [2024-07-25 09:41:43.623331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:43.113 [2024-07-25 09:41:43.623339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:43.113 [2024-07-25 09:41:43.623346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:43.113 [2024-07-25 09:41:43.623354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:43.113 [2024-07-25 09:41:43.623362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:43.113 [2024-07-25 09:41:43.623371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:43.113 [2024-07-25 09:41:43.623380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:43.113 [2024-07-25 09:41:43.623389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:43.113 [2024-07-25 09:41:43.623396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:43.113 [2024-07-25 09:41:43.623403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:43.113 [2024-07-25 09:41:43.623410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:43.113 [2024-07-25 09:41:43.623418] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:43.113 [2024-07-25 09:41:43.623425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:43.113 [2024-07-25 09:41:43.623435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:43.113 [2024-07-25 09:41:43.623442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:43.113 [2024-07-25 09:41:43.623452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:43.113 [2024-07-25 09:41:43.623460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:43.113 [2024-07-25 09:41:43.623468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:43.113 [2024-07-25 09:41:43.623475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:43.113 [2024-07-25 09:41:43.623485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:27:43.113 [2024-07-25 09:41:43.623492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:43.113 [2024-07-25 09:41:43.623660] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
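Before the scrub starts, the bring-up traced above can be summarized. The sequence attaches the 0000:00:11.0 drive as nvme0, deletes the lvstore already present on it, creates a new lvstore named lvs plus a thin-provisioned volume on top (thin provisioning is how a nominal 103424 MiB volume fits on the 5120 MiB QEMU drive reported for nvme0n1), attaches the 0000:00:10.0 drive as nvc0 and splits off a 5171 MiB partition as the write-buffer cache, and finally creates the FTL bdev ftl0 across the two. Condensed from the commands traced above, with the UUIDs this particular run generated, the assembly is:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: thin-provisioned 103424 MiB logical volume on the data drive.
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc_py bdev_lvol_delete_lvstore -u aaac27f8-9ed2-4615-af36-f764bb6f8829  # remove the lvstore already on the drive
    $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u 454451de-6c76-40af-9c09-a25c17777091

    # Write-buffer cache: 5171 MiB split of the second drive.
    $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc_py bdev_split_create nvc0n1 -s 5171 1

    # FTL bdev over both, core mask 0x7, L2P table capped at 60 MiB of DRAM.
    $rpc_py -t 240 bdev_ftl_create -b ftl0 -d c22ca81c-40e5-42c9-a3b7-2754cea0fde2 \
        -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The repeated bdev_get_bdevs/jq probes above are just the harness converting sizes to MiB: the volume reports a 4096-byte block size and 26476544 blocks, and 26476544 x 4096 B = 103424 MiB, of which the 5171 MiB cache split is about 5%. The scrub of the five NV cache chunks that follows is the longest single step of the bring-up, about 4.2 s of the 4.77 s reported for 'FTL startup' further down.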
00:27:43.113 [2024-07-25 09:41:43.623676] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:47.305 [2024-07-25 09:41:47.787785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.305 [2024-07-25 09:41:47.787842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:47.305 [2024-07-25 09:41:47.787859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4172.136 ms 00:27:47.305 [2024-07-25 09:41:47.787867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.305 [2024-07-25 09:41:47.829157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.305 [2024-07-25 09:41:47.829207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:47.305 [2024-07-25 09:41:47.829221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.012 ms 00:27:47.305 [2024-07-25 09:41:47.829237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.305 [2024-07-25 09:41:47.829445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.305 [2024-07-25 09:41:47.829461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:47.305 [2024-07-25 09:41:47.829474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:47.305 [2024-07-25 09:41:47.829481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.305 [2024-07-25 09:41:47.889032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.305 [2024-07-25 09:41:47.889078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:47.305 [2024-07-25 09:41:47.889093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.569 ms 00:27:47.305 [2024-07-25 09:41:47.889103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.305 [2024-07-25 09:41:47.889245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.305 [2024-07-25 09:41:47.889258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:47.305 [2024-07-25 09:41:47.889275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:47.305 [2024-07-25 09:41:47.889284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.305 [2024-07-25 09:41:47.889769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.305 [2024-07-25 09:41:47.889789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:47.305 [2024-07-25 09:41:47.889802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:27:47.305 [2024-07-25 09:41:47.889812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.305 [2024-07-25 09:41:47.889960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.305 [2024-07-25 09:41:47.889977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:47.305 [2024-07-25 09:41:47.889990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:27:47.305 [2024-07-25 09:41:47.890000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.305 [2024-07-25 09:41:47.913083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.305 [2024-07-25 09:41:47.913135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:47.305 [2024-07-25 
09:41:47.913148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.067 ms 00:27:47.305 [2024-07-25 09:41:47.913155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.565 [2024-07-25 09:41:47.926545] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:47.565 [2024-07-25 09:41:47.942643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.565 [2024-07-25 09:41:47.942706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:47.565 [2024-07-25 09:41:47.942733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.366 ms 00:27:47.565 [2024-07-25 09:41:47.942742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.565 [2024-07-25 09:41:48.060955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.565 [2024-07-25 09:41:48.061028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:47.565 [2024-07-25 09:41:48.061044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 118.278 ms 00:27:47.565 [2024-07-25 09:41:48.061055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.565 [2024-07-25 09:41:48.061339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.566 [2024-07-25 09:41:48.061362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:47.566 [2024-07-25 09:41:48.061372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.175 ms 00:27:47.566 [2024-07-25 09:41:48.061386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.566 [2024-07-25 09:41:48.097819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.566 [2024-07-25 09:41:48.097857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:47.566 [2024-07-25 09:41:48.097867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.404 ms 00:27:47.566 [2024-07-25 09:41:48.097892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.566 [2024-07-25 09:41:48.133652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.566 [2024-07-25 09:41:48.133687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:47.566 [2024-07-25 09:41:48.133696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.720 ms 00:27:47.566 [2024-07-25 09:41:48.133720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.566 [2024-07-25 09:41:48.134631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.566 [2024-07-25 09:41:48.134661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:47.566 [2024-07-25 09:41:48.134672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:27:47.566 [2024-07-25 09:41:48.134681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.827 [2024-07-25 09:41:48.248579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.827 [2024-07-25 09:41:48.248626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:47.827 [2024-07-25 09:41:48.248639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.061 ms 00:27:47.827 [2024-07-25 09:41:48.248650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.827 [2024-07-25 
09:41:48.286020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.827 [2024-07-25 09:41:48.286062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:47.827 [2024-07-25 09:41:48.286075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.323 ms 00:27:47.827 [2024-07-25 09:41:48.286084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.827 [2024-07-25 09:41:48.323236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.827 [2024-07-25 09:41:48.323289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:47.827 [2024-07-25 09:41:48.323299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.134 ms 00:27:47.827 [2024-07-25 09:41:48.323307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.827 [2024-07-25 09:41:48.360155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.827 [2024-07-25 09:41:48.360206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:47.827 [2024-07-25 09:41:48.360218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.825 ms 00:27:47.827 [2024-07-25 09:41:48.360226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.827 [2024-07-25 09:41:48.360353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.827 [2024-07-25 09:41:48.360366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:47.827 [2024-07-25 09:41:48.360374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:47.827 [2024-07-25 09:41:48.360385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.827 [2024-07-25 09:41:48.360491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:47.827 [2024-07-25 09:41:48.360502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:47.827 [2024-07-25 09:41:48.360510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:47.827 [2024-07-25 09:41:48.360538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:47.827 [2024-07-25 09:41:48.361666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:47.827 [2024-07-25 09:41:48.366723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4767.698 ms, result 0 00:27:47.827 [2024-07-25 09:41:48.367846] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:47.827 { 00:27:47.827 "name": "ftl0", 00:27:47.827 "uuid": "1c8aeb57-7dc0-42c8-bc02-351163259a4d" 00:27:47.827 } 00:27:47.827 09:41:48 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:27:47.827 09:41:48 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:27:47.827 09:41:48 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:27:47.827 09:41:48 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:27:47.827 09:41:48 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:27:47.827 09:41:48 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:27:47.827 09:41:48 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:48.086 09:41:48 ftl.ftl_trim -- common/autotest_common.sh@906 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:27:48.345 [ 00:27:48.345 { 00:27:48.345 "name": "ftl0", 00:27:48.345 "aliases": [ 00:27:48.345 "1c8aeb57-7dc0-42c8-bc02-351163259a4d" 00:27:48.345 ], 00:27:48.345 "product_name": "FTL disk", 00:27:48.345 "block_size": 4096, 00:27:48.345 "num_blocks": 23592960, 00:27:48.345 "uuid": "1c8aeb57-7dc0-42c8-bc02-351163259a4d", 00:27:48.345 "assigned_rate_limits": { 00:27:48.345 "rw_ios_per_sec": 0, 00:27:48.345 "rw_mbytes_per_sec": 0, 00:27:48.345 "r_mbytes_per_sec": 0, 00:27:48.345 "w_mbytes_per_sec": 0 00:27:48.345 }, 00:27:48.345 "claimed": false, 00:27:48.345 "zoned": false, 00:27:48.345 "supported_io_types": { 00:27:48.345 "read": true, 00:27:48.345 "write": true, 00:27:48.345 "unmap": true, 00:27:48.345 "flush": true, 00:27:48.345 "reset": false, 00:27:48.345 "nvme_admin": false, 00:27:48.345 "nvme_io": false, 00:27:48.345 "nvme_io_md": false, 00:27:48.345 "write_zeroes": true, 00:27:48.345 "zcopy": false, 00:27:48.345 "get_zone_info": false, 00:27:48.345 "zone_management": false, 00:27:48.345 "zone_append": false, 00:27:48.345 "compare": false, 00:27:48.345 "compare_and_write": false, 00:27:48.345 "abort": false, 00:27:48.345 "seek_hole": false, 00:27:48.345 "seek_data": false, 00:27:48.345 "copy": false, 00:27:48.345 "nvme_iov_md": false 00:27:48.345 }, 00:27:48.345 "driver_specific": { 00:27:48.345 "ftl": { 00:27:48.345 "base_bdev": "c22ca81c-40e5-42c9-a3b7-2754cea0fde2", 00:27:48.345 "cache": "nvc0n1p0" 00:27:48.345 } 00:27:48.345 } 00:27:48.345 } 00:27:48.345 ] 00:27:48.345 09:41:48 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:27:48.345 09:41:48 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:27:48.345 09:41:48 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:48.345 09:41:48 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:27:48.345 09:41:48 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:27:48.604 09:41:49 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:27:48.604 { 00:27:48.604 "name": "ftl0", 00:27:48.604 "aliases": [ 00:27:48.604 "1c8aeb57-7dc0-42c8-bc02-351163259a4d" 00:27:48.604 ], 00:27:48.604 "product_name": "FTL disk", 00:27:48.604 "block_size": 4096, 00:27:48.604 "num_blocks": 23592960, 00:27:48.604 "uuid": "1c8aeb57-7dc0-42c8-bc02-351163259a4d", 00:27:48.604 "assigned_rate_limits": { 00:27:48.604 "rw_ios_per_sec": 0, 00:27:48.604 "rw_mbytes_per_sec": 0, 00:27:48.604 "r_mbytes_per_sec": 0, 00:27:48.604 "w_mbytes_per_sec": 0 00:27:48.604 }, 00:27:48.604 "claimed": false, 00:27:48.604 "zoned": false, 00:27:48.604 "supported_io_types": { 00:27:48.604 "read": true, 00:27:48.604 "write": true, 00:27:48.604 "unmap": true, 00:27:48.604 "flush": true, 00:27:48.604 "reset": false, 00:27:48.604 "nvme_admin": false, 00:27:48.604 "nvme_io": false, 00:27:48.604 "nvme_io_md": false, 00:27:48.604 "write_zeroes": true, 00:27:48.604 "zcopy": false, 00:27:48.604 "get_zone_info": false, 00:27:48.604 "zone_management": false, 00:27:48.604 "zone_append": false, 00:27:48.604 "compare": false, 00:27:48.604 "compare_and_write": false, 00:27:48.604 "abort": false, 00:27:48.604 "seek_hole": false, 00:27:48.604 "seek_data": false, 00:27:48.604 "copy": false, 00:27:48.604 "nvme_iov_md": false 00:27:48.604 }, 00:27:48.604 "driver_specific": { 00:27:48.604 "ftl": { 00:27:48.604 "base_bdev": "c22ca81c-40e5-42c9-a3b7-2754cea0fde2", 00:27:48.604 "cache": "nvc0n1p0" 
00:27:48.604 } 00:27:48.604 } 00:27:48.604 } 00:27:48.604 ]' 00:27:48.605 09:41:49 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:27:48.605 09:41:49 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:27:48.605 09:41:49 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:48.863 [2024-07-25 09:41:49.342211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.863 [2024-07-25 09:41:49.342269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:48.863 [2024-07-25 09:41:49.342283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:48.863 [2024-07-25 09:41:49.342290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.863 [2024-07-25 09:41:49.342361] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:27:48.863 [2024-07-25 09:41:49.346050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.863 [2024-07-25 09:41:49.346084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:48.863 [2024-07-25 09:41:49.346102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.681 ms 00:27:48.863 [2024-07-25 09:41:49.346114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.863 [2024-07-25 09:41:49.347229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.863 [2024-07-25 09:41:49.347262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:48.863 [2024-07-25 09:41:49.347272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:27:48.863 [2024-07-25 09:41:49.347301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.863 [2024-07-25 09:41:49.350060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.863 [2024-07-25 09:41:49.350083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:48.863 [2024-07-25 09:41:49.350105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.705 ms 00:27:48.863 [2024-07-25 09:41:49.350113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.863 [2024-07-25 09:41:49.355480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.863 [2024-07-25 09:41:49.355513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:48.863 [2024-07-25 09:41:49.355522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.325 ms 00:27:48.863 [2024-07-25 09:41:49.355530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.863 [2024-07-25 09:41:49.392320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.863 [2024-07-25 09:41:49.392360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:48.863 [2024-07-25 09:41:49.392379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.707 ms 00:27:48.863 [2024-07-25 09:41:49.392390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.863 [2024-07-25 09:41:49.415091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.863 [2024-07-25 09:41:49.415132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:48.863 [2024-07-25 09:41:49.415145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.638 ms 00:27:48.863 
[2024-07-25 09:41:49.415154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.863 [2024-07-25 09:41:49.415505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.863 [2024-07-25 09:41:49.415523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:48.863 [2024-07-25 09:41:49.415532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:27:48.863 [2024-07-25 09:41:49.415541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:48.863 [2024-07-25 09:41:49.451112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:48.863 [2024-07-25 09:41:49.451147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:27:48.863 [2024-07-25 09:41:49.451157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.586 ms 00:27:48.863 [2024-07-25 09:41:49.451165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.122 [2024-07-25 09:41:49.487326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.122 [2024-07-25 09:41:49.487361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:27:49.122 [2024-07-25 09:41:49.487370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.122 ms 00:27:49.122 [2024-07-25 09:41:49.487381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.122 [2024-07-25 09:41:49.522657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.122 [2024-07-25 09:41:49.522693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:49.122 [2024-07-25 09:41:49.522703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.246 ms 00:27:49.122 [2024-07-25 09:41:49.522711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.122 [2024-07-25 09:41:49.557861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.122 [2024-07-25 09:41:49.557896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:49.122 [2024-07-25 09:41:49.557905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.029 ms 00:27:49.122 [2024-07-25 09:41:49.557912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.122 [2024-07-25 09:41:49.558011] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:49.122 [2024-07-25 09:41:49.558027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558088] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558345] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 
09:41:49.558554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
00:27:49.122 [2024-07-25 09:41:49.558764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:49.122 [2024-07-25 09:41:49.558901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:49.123 [2024-07-25 09:41:49.558911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:49.123 [2024-07-25 09:41:49.558927] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:49.123 [2024-07-25 09:41:49.558933] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c8aeb57-7dc0-42c8-bc02-351163259a4d 00:27:49.123 [2024-07-25 09:41:49.558945] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:49.123 [2024-07-25 09:41:49.558955] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:49.123 [2024-07-25 09:41:49.558963] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:49.123 [2024-07-25 09:41:49.558971] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:49.123 [2024-07-25 09:41:49.558978] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:49.123 [2024-07-25 09:41:49.558989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:49.123 [2024-07-25 09:41:49.558997] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:49.123 [2024-07-25 09:41:49.559003] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:49.123 [2024-07-25 09:41:49.559011] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:49.123 [2024-07-25 09:41:49.559018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.123 [2024-07-25 09:41:49.559027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:49.123 [2024-07-25 09:41:49.559034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:27:49.123 [2024-07-25 09:41:49.559042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.123 [2024-07-25 09:41:49.579090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.123 [2024-07-25 09:41:49.579126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:49.123 [2024-07-25 09:41:49.579136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.039 ms 00:27:49.123 [2024-07-25 09:41:49.579147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.123 [2024-07-25 09:41:49.579797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:49.123 [2024-07-25 09:41:49.579824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:49.123 [2024-07-25 09:41:49.579834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:27:49.123 [2024-07-25 09:41:49.579843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.123 [2024-07-25 09:41:49.647902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.123 [2024-07-25 09:41:49.647946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:49.123 [2024-07-25 09:41:49.647957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.123 [2024-07-25 09:41:49.647966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.123 [2024-07-25 09:41:49.648101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.123 [2024-07-25 09:41:49.648113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:49.123 [2024-07-25 09:41:49.648122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.123 [2024-07-25 09:41:49.648130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.123 [2024-07-25 09:41:49.648220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.123 [2024-07-25 09:41:49.648246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:49.123 [2024-07-25 09:41:49.648255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.123 [2024-07-25 09:41:49.648266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.123 [2024-07-25 09:41:49.648335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.123 [2024-07-25 09:41:49.648346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:49.123 [2024-07-25 09:41:49.648353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.123 [2024-07-25 09:41:49.648363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.381 [2024-07-25 09:41:49.774008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:27:49.381 [2024-07-25 09:41:49.774060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:49.381 [2024-07-25 09:41:49.774074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.381 [2024-07-25 09:41:49.774083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.381 [2024-07-25 09:41:49.876981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.381 [2024-07-25 09:41:49.877040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:49.381 [2024-07-25 09:41:49.877052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.381 [2024-07-25 09:41:49.877062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.381 [2024-07-25 09:41:49.877203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.381 [2024-07-25 09:41:49.877219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:49.381 [2024-07-25 09:41:49.877242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.381 [2024-07-25 09:41:49.877255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.381 [2024-07-25 09:41:49.877348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.381 [2024-07-25 09:41:49.877358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:49.381 [2024-07-25 09:41:49.877367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.381 [2024-07-25 09:41:49.877375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.381 [2024-07-25 09:41:49.877517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.381 [2024-07-25 09:41:49.877539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:49.381 [2024-07-25 09:41:49.877564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.381 [2024-07-25 09:41:49.877574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.381 [2024-07-25 09:41:49.877657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.381 [2024-07-25 09:41:49.877677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:49.381 [2024-07-25 09:41:49.877687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.381 [2024-07-25 09:41:49.877696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.381 [2024-07-25 09:41:49.877766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.381 [2024-07-25 09:41:49.877778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:49.381 [2024-07-25 09:41:49.877788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.381 [2024-07-25 09:41:49.877799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.381 [2024-07-25 09:41:49.877869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:49.381 [2024-07-25 09:41:49.877882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:49.381 [2024-07-25 09:41:49.877889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:49.381 [2024-07-25 09:41:49.877898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:49.381 [2024-07-25 
09:41:49.878160] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 536.980 ms, result 0 00:27:49.381 true 00:27:49.381 09:41:49 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 80053 00:27:49.381 09:41:49 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 80053 ']' 00:27:49.381 09:41:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 80053 00:27:49.381 09:41:49 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:27:49.381 09:41:49 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:49.381 09:41:49 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80053 00:27:49.381 09:41:49 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:49.381 09:41:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:49.381 09:41:49 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80053' 00:27:49.381 killing process with pid 80053 00:27:49.381 09:41:49 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 80053 00:27:49.381 09:41:49 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 80053 00:27:57.540 09:41:56 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:27:57.540 65536+0 records in 00:27:57.540 65536+0 records out 00:27:57.540 268435456 bytes (268 MB, 256 MiB) copied, 0.83854 s, 320 MB/s 00:27:57.540 09:41:57 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:57.540 [2024-07-25 09:41:57.917604] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:57.540 [2024-07-25 09:41:57.917710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80301 ] 00:27:57.540 [2024-07-25 09:41:58.077372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.799 [2024-07-25 09:41:58.304371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.370 [2024-07-25 09:41:58.702640] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:58.370 [2024-07-25 09:41:58.702707] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:58.370 [2024-07-25 09:41:58.860192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.860264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:58.370 [2024-07-25 09:41:58.860280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:58.370 [2024-07-25 09:41:58.860288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.863161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.863197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:58.370 [2024-07-25 09:41:58.863208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.860 ms 00:27:58.370 [2024-07-25 09:41:58.863215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.863309] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:58.370 [2024-07-25 09:41:58.864425] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:58.370 [2024-07-25 09:41:58.864459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.864468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:58.370 [2024-07-25 09:41:58.864477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.159 ms 00:27:58.370 [2024-07-25 09:41:58.864484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.865928] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:58.370 [2024-07-25 09:41:58.886266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.886302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:58.370 [2024-07-25 09:41:58.886318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.378 ms 00:27:58.370 [2024-07-25 09:41:58.886326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.886415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.886427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:58.370 [2024-07-25 09:41:58.886435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:27:58.370 [2024-07-25 09:41:58.886442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.893265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:58.370 [2024-07-25 09:41:58.893290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:58.370 [2024-07-25 09:41:58.893299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.798 ms 00:27:58.370 [2024-07-25 09:41:58.893306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.893391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.893404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:58.370 [2024-07-25 09:41:58.893413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:27:58.370 [2024-07-25 09:41:58.893420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.893451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.893460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:58.370 [2024-07-25 09:41:58.893469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:58.370 [2024-07-25 09:41:58.893476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.893497] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:27:58.370 [2024-07-25 09:41:58.899067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.899096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:58.370 [2024-07-25 09:41:58.899105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.587 ms 00:27:58.370 [2024-07-25 09:41:58.899113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.899173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.899184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:58.370 [2024-07-25 09:41:58.899194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:58.370 [2024-07-25 09:41:58.899201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.899221] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:58.370 [2024-07-25 09:41:58.899268] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:58.370 [2024-07-25 09:41:58.899303] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:58.370 [2024-07-25 09:41:58.899317] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:58.370 [2024-07-25 09:41:58.899414] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:58.370 [2024-07-25 09:41:58.899436] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:58.370 [2024-07-25 09:41:58.899447] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:58.370 [2024-07-25 09:41:58.899457] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:58.370 [2024-07-25 09:41:58.899466] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:58.370 [2024-07-25 09:41:58.899478] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:27:58.370 [2024-07-25 09:41:58.899485] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:58.370 [2024-07-25 09:41:58.899494] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:58.370 [2024-07-25 09:41:58.899502] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:58.370 [2024-07-25 09:41:58.899510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.899518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:58.370 [2024-07-25 09:41:58.899526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:27:58.370 [2024-07-25 09:41:58.899534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.899606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.370 [2024-07-25 09:41:58.899616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:58.370 [2024-07-25 09:41:58.899626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:58.370 [2024-07-25 09:41:58.899649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.370 [2024-07-25 09:41:58.899735] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:58.370 [2024-07-25 09:41:58.899745] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:58.370 [2024-07-25 09:41:58.899753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:58.370 [2024-07-25 09:41:58.899770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.370 [2024-07-25 09:41:58.899778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:58.370 [2024-07-25 09:41:58.899786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:58.370 [2024-07-25 09:41:58.899792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:27:58.370 [2024-07-25 09:41:58.899799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:58.370 [2024-07-25 09:41:58.899806] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:27:58.371 [2024-07-25 09:41:58.899814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:58.371 [2024-07-25 09:41:58.899822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:58.371 [2024-07-25 09:41:58.899829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:27:58.371 [2024-07-25 09:41:58.899837] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:58.371 [2024-07-25 09:41:58.899844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:58.371 [2024-07-25 09:41:58.899851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:27:58.371 [2024-07-25 09:41:58.899858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.371 [2024-07-25 09:41:58.899866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:58.371 [2024-07-25 09:41:58.899873] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:27:58.371 [2024-07-25 09:41:58.899893] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.371 [2024-07-25 09:41:58.899901] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:58.371 [2024-07-25 09:41:58.899909] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:27:58.371 [2024-07-25 09:41:58.899915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.371 [2024-07-25 09:41:58.899922] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:58.371 [2024-07-25 09:41:58.899928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:27:58.371 [2024-07-25 09:41:58.899935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.371 [2024-07-25 09:41:58.899941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:58.371 [2024-07-25 09:41:58.899949] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:27:58.371 [2024-07-25 09:41:58.899955] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.371 [2024-07-25 09:41:58.899961] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:58.371 [2024-07-25 09:41:58.899967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:27:58.371 [2024-07-25 09:41:58.899974] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:58.371 [2024-07-25 09:41:58.899980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:58.371 [2024-07-25 09:41:58.899987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:27:58.371 [2024-07-25 09:41:58.899993] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:58.371 [2024-07-25 09:41:58.900000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:58.371 [2024-07-25 09:41:58.900007] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:27:58.371 [2024-07-25 09:41:58.900013] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:58.371 [2024-07-25 09:41:58.900018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:58.371 [2024-07-25 09:41:58.900024] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:27:58.371 [2024-07-25 09:41:58.900031] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.371 [2024-07-25 09:41:58.900038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:58.371 [2024-07-25 09:41:58.900045] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:27:58.371 [2024-07-25 09:41:58.900051] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.371 [2024-07-25 09:41:58.900057] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:58.371 [2024-07-25 09:41:58.900064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:58.371 [2024-07-25 09:41:58.900071] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:58.371 [2024-07-25 09:41:58.900077] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:58.371 [2024-07-25 09:41:58.900088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:58.371 [2024-07-25 09:41:58.900095] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:58.371 [2024-07-25 09:41:58.900102] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:58.371 
[2024-07-25 09:41:58.900109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:58.371 [2024-07-25 09:41:58.900115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:58.371 [2024-07-25 09:41:58.900122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:58.371 [2024-07-25 09:41:58.900130] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:58.371 [2024-07-25 09:41:58.900140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.371 [2024-07-25 09:41:58.900149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:27:58.371 [2024-07-25 09:41:58.900157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:27:58.371 [2024-07-25 09:41:58.900164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:27:58.371 [2024-07-25 09:41:58.900172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:27:58.371 [2024-07-25 09:41:58.900179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:27:58.371 [2024-07-25 09:41:58.900186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:27:58.371 [2024-07-25 09:41:58.900192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:27:58.371 [2024-07-25 09:41:58.900199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:27:58.371 [2024-07-25 09:41:58.900206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:27:58.371 [2024-07-25 09:41:58.900214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:27:58.371 [2024-07-25 09:41:58.900220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:27:58.371 [2024-07-25 09:41:58.900239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:27:58.371 [2024-07-25 09:41:58.900247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:27:58.371 [2024-07-25 09:41:58.900254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:27:58.371 [2024-07-25 09:41:58.900261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:58.371 [2024-07-25 09:41:58.900270] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:58.371 [2024-07-25 09:41:58.900278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:58.371 [2024-07-25 09:41:58.900285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:58.371 [2024-07-25 09:41:58.900293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:58.371 [2024-07-25 09:41:58.900300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:58.371 [2024-07-25 09:41:58.900308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.371 [2024-07-25 09:41:58.900316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:58.371 [2024-07-25 09:41:58.900323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:27:58.371 [2024-07-25 09:41:58.900331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.371 [2024-07-25 09:41:58.953725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.371 [2024-07-25 09:41:58.953763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:58.371 [2024-07-25 09:41:58.953779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.441 ms 00:27:58.371 [2024-07-25 09:41:58.953787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.371 [2024-07-25 09:41:58.953940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.371 [2024-07-25 09:41:58.953954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:58.371 [2024-07-25 09:41:58.953963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:27:58.371 [2024-07-25 09:41:58.953970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.631 [2024-07-25 09:41:59.005656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.631 [2024-07-25 09:41:59.005693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:58.631 [2024-07-25 09:41:59.005704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.764 ms 00:27:58.631 [2024-07-25 09:41:59.005714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.631 [2024-07-25 09:41:59.005789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.631 [2024-07-25 09:41:59.005799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:58.631 [2024-07-25 09:41:59.005808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:58.631 [2024-07-25 09:41:59.005815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.631 [2024-07-25 09:41:59.006256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.631 [2024-07-25 09:41:59.006275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:58.632 [2024-07-25 09:41:59.006284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:27:58.632 [2024-07-25 09:41:59.006292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.006409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.006434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:58.632 [2024-07-25 09:41:59.006444] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:58.632 [2024-07-25 09:41:59.006451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.026960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.026995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:58.632 [2024-07-25 09:41:59.027005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.526 ms 00:27:58.632 [2024-07-25 09:41:59.027012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.046553] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:58.632 [2024-07-25 09:41:59.046591] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:58.632 [2024-07-25 09:41:59.046603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.046612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:58.632 [2024-07-25 09:41:59.046621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.509 ms 00:27:58.632 [2024-07-25 09:41:59.046628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.076020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.076073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:58.632 [2024-07-25 09:41:59.076102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.378 ms 00:27:58.632 [2024-07-25 09:41:59.076109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.094510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.094546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:58.632 [2024-07-25 09:41:59.094557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.348 ms 00:27:58.632 [2024-07-25 09:41:59.094564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.113171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.113204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:58.632 [2024-07-25 09:41:59.113214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.576 ms 00:27:58.632 [2024-07-25 09:41:59.113221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.114018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.114052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:58.632 [2024-07-25 09:41:59.114062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:27:58.632 [2024-07-25 09:41:59.114070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.205205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.205286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:58.632 [2024-07-25 09:41:59.205300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 91.282 ms 00:27:58.632 [2024-07-25 09:41:59.205308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.217235] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:27:58.632 [2024-07-25 09:41:59.233493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.233553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:58.632 [2024-07-25 09:41:59.233565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.106 ms 00:27:58.632 [2024-07-25 09:41:59.233574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.233691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.233703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:58.632 [2024-07-25 09:41:59.233715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:58.632 [2024-07-25 09:41:59.233723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.233779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.233789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:58.632 [2024-07-25 09:41:59.233796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:58.632 [2024-07-25 09:41:59.233803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.233823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.233832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:58.632 [2024-07-25 09:41:59.233839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:58.632 [2024-07-25 09:41:59.233849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.632 [2024-07-25 09:41:59.233880] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:58.632 [2024-07-25 09:41:59.233890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.632 [2024-07-25 09:41:59.233897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:58.632 [2024-07-25 09:41:59.233907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:58.632 [2024-07-25 09:41:59.233914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.891 [2024-07-25 09:41:59.271593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.891 [2024-07-25 09:41:59.271633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:58.891 [2024-07-25 09:41:59.271649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.731 ms 00:27:58.891 [2024-07-25 09:41:59.271657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:58.891 [2024-07-25 09:41:59.271767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:58.891 [2024-07-25 09:41:59.271788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:58.891 [2024-07-25 09:41:59.271797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:58.891 [2024-07-25 09:41:59.271804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:27:58.891 [2024-07-25 09:41:59.272692] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:58.891 [2024-07-25 09:41:59.277711] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.017 ms, result 0 00:27:58.891 [2024-07-25 09:41:59.278546] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:58.891 [2024-07-25 09:41:59.296913] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:09.950  Copying: 23/256 [MB] (23 MBps) Copying: 45/256 [MB] (22 MBps) Copying: 69/256 [MB] (23 MBps) Copying: 91/256 [MB] (22 MBps) Copying: 114/256 [MB] (22 MBps) Copying: 136/256 [MB] (22 MBps) Copying: 158/256 [MB] (22 MBps) Copying: 180/256 [MB] (21 MBps) Copying: 204/256 [MB] (23 MBps) Copying: 227/256 [MB] (23 MBps) Copying: 250/256 [MB] (23 MBps) Copying: 256/256 [MB] (average 22 MBps)[2024-07-25 09:42:10.508593] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:09.950 [2024-07-25 09:42:10.524203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.950 [2024-07-25 09:42:10.524268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:09.950 [2024-07-25 09:42:10.524283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:09.950 [2024-07-25 09:42:10.524292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.950 [2024-07-25 09:42:10.524314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:09.950 [2024-07-25 09:42:10.528117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.950 [2024-07-25 09:42:10.528149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:09.950 [2024-07-25 09:42:10.528158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.797 ms 00:28:09.950 [2024-07-25 09:42:10.528182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.950 [2024-07-25 09:42:10.530259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.950 [2024-07-25 09:42:10.530292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:09.950 [2024-07-25 09:42:10.530319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.056 ms 00:28:09.950 [2024-07-25 09:42:10.530327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.950 [2024-07-25 09:42:10.536757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.950 [2024-07-25 09:42:10.536791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:09.950 [2024-07-25 09:42:10.536802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.426 ms 00:28:09.950 [2024-07-25 09:42:10.536816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:09.950 [2024-07-25 09:42:10.542542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:09.950 [2024-07-25 09:42:10.542572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:09.950 [2024-07-25 09:42:10.542581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.692 ms 00:28:09.950 [2024-07-25 09:42:10.542589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:28:10.210 [2024-07-25 09:42:10.581813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.210 [2024-07-25 09:42:10.581853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:10.210 [2024-07-25 09:42:10.581864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.251 ms 00:28:10.210 [2024-07-25 09:42:10.581871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.210 [2024-07-25 09:42:10.603776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.210 [2024-07-25 09:42:10.603821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:10.210 [2024-07-25 09:42:10.603833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.896 ms 00:28:10.210 [2024-07-25 09:42:10.603840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.210 [2024-07-25 09:42:10.603976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.210 [2024-07-25 09:42:10.603988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:10.210 [2024-07-25 09:42:10.603997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:28:10.210 [2024-07-25 09:42:10.604005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.210 [2024-07-25 09:42:10.641399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.211 [2024-07-25 09:42:10.641437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:10.211 [2024-07-25 09:42:10.641447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.449 ms 00:28:10.211 [2024-07-25 09:42:10.641455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.211 [2024-07-25 09:42:10.677954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.211 [2024-07-25 09:42:10.677990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:10.211 [2024-07-25 09:42:10.678001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.521 ms 00:28:10.211 [2024-07-25 09:42:10.678008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.211 [2024-07-25 09:42:10.716235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.211 [2024-07-25 09:42:10.716288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:10.211 [2024-07-25 09:42:10.716299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.247 ms 00:28:10.211 [2024-07-25 09:42:10.716307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.211 [2024-07-25 09:42:10.752695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.211 [2024-07-25 09:42:10.752734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:10.211 [2024-07-25 09:42:10.752746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.383 ms 00:28:10.211 [2024-07-25 09:42:10.752753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.211 [2024-07-25 09:42:10.752802] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:10.211 [2024-07-25 09:42:10.752818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.752984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753224] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:10.211 [2024-07-25 09:42:10.753424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753432] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:10.212 [2024-07-25 09:42:10.753627] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:10.212 [2024-07-25 09:42:10.753634] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c8aeb57-7dc0-42c8-bc02-351163259a4d 00:28:10.212 [2024-07-25 09:42:10.753642] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:10.212 [2024-07-25 09:42:10.753650] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:10.212 [2024-07-25 09:42:10.753658] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:10.212 [2024-07-25 09:42:10.753679] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:10.212 [2024-07-25 09:42:10.753686] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:10.212 [2024-07-25 09:42:10.753694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:10.212 [2024-07-25 09:42:10.753701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:10.212 [2024-07-25 09:42:10.753708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:10.212 [2024-07-25 09:42:10.753715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:10.212 [2024-07-25 09:42:10.753723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.212 [2024-07-25 09:42:10.753730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:10.212 [2024-07-25 09:42:10.753739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:28:10.212 [2024-07-25 09:42:10.753750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.212 [2024-07-25 09:42:10.774063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.212 [2024-07-25 09:42:10.774096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:10.212 [2024-07-25 09:42:10.774121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.333 ms 00:28:10.212 [2024-07-25 09:42:10.774129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.212 [2024-07-25 09:42:10.774643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:10.212 [2024-07-25 09:42:10.774660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:10.212 [2024-07-25 09:42:10.774675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:28:10.212 [2024-07-25 09:42:10.774682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:10.824178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:10.824217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:10.472 [2024-07-25 09:42:10.824236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:10.824245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:10.824341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:10.824352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:10.472 [2024-07-25 09:42:10.824362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:10.824369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:10.824415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:10.824426] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:10.472 [2024-07-25 09:42:10.824434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:10.824441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:10.824465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:10.824476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:10.472 [2024-07-25 09:42:10.824483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:10.824493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:10.943775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:10.943841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:10.472 [2024-07-25 09:42:10.943854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:10.943862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:11.044083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:11.044140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:10.472 [2024-07-25 09:42:11.044158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:11.044166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:11.044269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:11.044280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:10.472 [2024-07-25 09:42:11.044288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:11.044295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:11.044323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:11.044331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:10.472 [2024-07-25 09:42:11.044338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:11.044346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:11.044447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:11.044460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:10.472 [2024-07-25 09:42:11.044468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:11.044475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:11.044508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:11.044518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:10.472 [2024-07-25 09:42:11.044526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:11.044533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:11.044572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:28:10.472 [2024-07-25 09:42:11.044581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:10.472 [2024-07-25 09:42:11.044588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:11.044595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:11.044637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:10.472 [2024-07-25 09:42:11.044646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:10.472 [2024-07-25 09:42:11.044653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:10.472 [2024-07-25 09:42:11.044660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:10.472 [2024-07-25 09:42:11.044798] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.585 ms, result 0 00:28:12.381 00:28:12.381 00:28:12.381 09:42:12 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=80455 00:28:12.381 09:42:12 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:28:12.381 09:42:12 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 80455 00:28:12.381 09:42:12 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 80455 ']' 00:28:12.381 09:42:12 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.381 09:42:12 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:12.381 09:42:12 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.381 09:42:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:12.381 09:42:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:12.381 [2024-07-25 09:42:12.747351] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
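
The trace above closes out the previous FTL instance: the band dump shows the bands still free and unwritten, and the statistics block reports "WAF: inf" because the reported write amplification factor is effectively total writes divided by user writes, and with 960 total writes against 0 user writes there is no finite ratio to print. The 'FTL shutdown' management process then finishes with result 0, and the trim test brings up a fresh spdk_tgt with FTL init logging, waits for its RPC socket at /var/tmp/spdk.sock, re-creates the bdev stack (including ftl0) through load_config, and later issues two bdev_ftl_unmap calls: one at LBA 0 and one at LBA 23591936, i.e. 1024 blocks below the 23592960-block device size reported in the layout dump further down. A rough manual equivalent of that sequence, reconstructed from this trace rather than taken from trim.sh itself (the rpc_get_methods readiness poll and the $ftl_config file are assumptions standing in for the test's waitforlisten helper and its saved JSON config), looks like:

    # start the SPDK target with FTL init tracing, as the trace above does via ftl/trim.sh
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # poll the default RPC socket until the target answers (stand-in for waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # re-create the ftl0 bdev from a previously saved config ($ftl_config is hypothetical here)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < "$ftl_config"
    # trim 1024 blocks at the start and at the end of the device, as trim.sh does at lines 78-79
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
    # tear the target down again, as killprocess does later in the trace
    kill $svcpid && wait $svcpid

As the log below confirms, each unmap is handled as a short 'FTL trim' management process (roughly 1.8 ms and 1.4 ms) before the target is killed and the FTL device persists its metadata and shuts down cleanly.
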
00:28:12.381 [2024-07-25 09:42:12.748036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80455 ] 00:28:12.381 [2024-07-25 09:42:12.918151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.640 [2024-07-25 09:42:13.137540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.578 09:42:14 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:13.578 09:42:14 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:28:13.578 09:42:14 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:28:13.838 [2024-07-25 09:42:14.225313] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:13.838 [2024-07-25 09:42:14.225370] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:13.838 [2024-07-25 09:42:14.400188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.838 [2024-07-25 09:42:14.400252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:13.838 [2024-07-25 09:42:14.400267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:13.838 [2024-07-25 09:42:14.400279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.838 [2024-07-25 09:42:14.403143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.838 [2024-07-25 09:42:14.403177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:13.838 [2024-07-25 09:42:14.403187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.847 ms 00:28:13.838 [2024-07-25 09:42:14.403197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.403287] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:13.839 [2024-07-25 09:42:14.404535] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:13.839 [2024-07-25 09:42:14.404565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.839 [2024-07-25 09:42:14.404577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:13.839 [2024-07-25 09:42:14.404587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.288 ms 00:28:13.839 [2024-07-25 09:42:14.404601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.406198] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:13.839 [2024-07-25 09:42:14.426863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.839 [2024-07-25 09:42:14.426897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:13.839 [2024-07-25 09:42:14.426911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.700 ms 00:28:13.839 [2024-07-25 09:42:14.426919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.427013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.839 [2024-07-25 09:42:14.427026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:13.839 [2024-07-25 09:42:14.427036] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:13.839 [2024-07-25 09:42:14.427043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.433905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.839 [2024-07-25 09:42:14.433932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:13.839 [2024-07-25 09:42:14.433947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.825 ms 00:28:13.839 [2024-07-25 09:42:14.433955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.434059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.839 [2024-07-25 09:42:14.434072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:13.839 [2024-07-25 09:42:14.434082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:13.839 [2024-07-25 09:42:14.434093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.434124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.839 [2024-07-25 09:42:14.434132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:13.839 [2024-07-25 09:42:14.434142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:13.839 [2024-07-25 09:42:14.434150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.434175] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:13.839 [2024-07-25 09:42:14.439803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.839 [2024-07-25 09:42:14.439847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:13.839 [2024-07-25 09:42:14.439856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.647 ms 00:28:13.839 [2024-07-25 09:42:14.439866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.439928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.839 [2024-07-25 09:42:14.439943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:13.839 [2024-07-25 09:42:14.439954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:13.839 [2024-07-25 09:42:14.439963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.439985] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:13.839 [2024-07-25 09:42:14.440007] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:13.839 [2024-07-25 09:42:14.440051] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:13.839 [2024-07-25 09:42:14.440071] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:13.839 [2024-07-25 09:42:14.440157] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:13.839 [2024-07-25 09:42:14.440175] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:13.839 [2024-07-25 09:42:14.440185] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:13.839 [2024-07-25 09:42:14.440197] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440206] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440216] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:13.839 [2024-07-25 09:42:14.440224] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:13.839 [2024-07-25 09:42:14.440246] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:13.839 [2024-07-25 09:42:14.440255] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:13.839 [2024-07-25 09:42:14.440267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.839 [2024-07-25 09:42:14.440276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:13.839 [2024-07-25 09:42:14.440285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:28:13.839 [2024-07-25 09:42:14.440295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.440371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.839 [2024-07-25 09:42:14.440380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:13.839 [2024-07-25 09:42:14.440390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:13.839 [2024-07-25 09:42:14.440397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:13.839 [2024-07-25 09:42:14.440493] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:13.839 [2024-07-25 09:42:14.440506] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:13.839 [2024-07-25 09:42:14.440516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440525] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:13.839 [2024-07-25 09:42:14.440545] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440555] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:13.839 [2024-07-25 09:42:14.440574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440581] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:13.839 [2024-07-25 09:42:14.440590] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:13.839 [2024-07-25 09:42:14.440597] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:13.839 [2024-07-25 09:42:14.440606] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:13.839 [2024-07-25 09:42:14.440613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:13.839 [2024-07-25 09:42:14.440621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:13.839 [2024-07-25 09:42:14.440628] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.839 
[2024-07-25 09:42:14.440636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:13.839 [2024-07-25 09:42:14.440643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:13.839 [2024-07-25 09:42:14.440667] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440673] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:13.839 [2024-07-25 09:42:14.440688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440699] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:13.839 [2024-07-25 09:42:14.440714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440745] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:13.839 [2024-07-25 09:42:14.440764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440774] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:13.839 [2024-07-25 09:42:14.440792] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440799] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:13.839 [2024-07-25 09:42:14.440808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:13.839 [2024-07-25 09:42:14.440816] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:13.839 [2024-07-25 09:42:14.440825] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:13.839 [2024-07-25 09:42:14.440833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:13.839 [2024-07-25 09:42:14.440842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:13.839 [2024-07-25 09:42:14.440850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440861] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:13.839 [2024-07-25 09:42:14.440870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:13.839 [2024-07-25 09:42:14.440879] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440886] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:13.839 [2024-07-25 09:42:14.440897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:13.839 [2024-07-25 09:42:14.440905] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440914] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:13.839 [2024-07-25 09:42:14.440922] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:28:13.839 [2024-07-25 09:42:14.440933] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:13.839 [2024-07-25 09:42:14.440941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:13.839 [2024-07-25 09:42:14.440951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:13.839 [2024-07-25 09:42:14.440958] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:13.839 [2024-07-25 09:42:14.440968] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:13.839 [2024-07-25 09:42:14.440977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:13.839 [2024-07-25 09:42:14.440989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.839 [2024-07-25 09:42:14.441010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:13.839 [2024-07-25 09:42:14.441022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:13.839 [2024-07-25 09:42:14.441031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:13.839 [2024-07-25 09:42:14.441040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:13.839 [2024-07-25 09:42:14.441059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:13.839 [2024-07-25 09:42:14.441068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:13.839 [2024-07-25 09:42:14.441075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:13.839 [2024-07-25 09:42:14.441084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:13.839 [2024-07-25 09:42:14.441090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:13.839 [2024-07-25 09:42:14.441099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:13.839 [2024-07-25 09:42:14.441106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:13.839 [2024-07-25 09:42:14.441114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:13.839 [2024-07-25 09:42:14.441122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:13.839 [2024-07-25 09:42:14.441131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:13.839 [2024-07-25 09:42:14.441138] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:13.839 [2024-07-25 
09:42:14.441147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:13.839 [2024-07-25 09:42:14.441155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:13.839 [2024-07-25 09:42:14.441166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:13.840 [2024-07-25 09:42:14.441174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:13.840 [2024-07-25 09:42:14.441183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:13.840 [2024-07-25 09:42:14.441191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:13.840 [2024-07-25 09:42:14.441200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:13.840 [2024-07-25 09:42:14.441207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:28:13.840 [2024-07-25 09:42:14.441218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.099 [2024-07-25 09:42:14.486639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.099 [2024-07-25 09:42:14.486692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.099 [2024-07-25 09:42:14.486708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.441 ms 00:28:14.099 [2024-07-25 09:42:14.486720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.099 [2024-07-25 09:42:14.486861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.099 [2024-07-25 09:42:14.486878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:14.099 [2024-07-25 09:42:14.486887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:14.099 [2024-07-25 09:42:14.486900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.099 [2024-07-25 09:42:14.538694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.099 [2024-07-25 09:42:14.538736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:14.099 [2024-07-25 09:42:14.538748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.872 ms 00:28:14.099 [2024-07-25 09:42:14.538757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.099 [2024-07-25 09:42:14.538862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.099 [2024-07-25 09:42:14.538875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:14.099 [2024-07-25 09:42:14.538885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:14.099 [2024-07-25 09:42:14.538895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.099 [2024-07-25 09:42:14.539325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.099 [2024-07-25 09:42:14.539344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:14.099 [2024-07-25 09:42:14.539352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:28:14.099 [2024-07-25 09:42:14.539361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:14.099 [2024-07-25 09:42:14.539467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.099 [2024-07-25 09:42:14.539483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:14.099 [2024-07-25 09:42:14.539491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:28:14.099 [2024-07-25 09:42:14.539502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.099 [2024-07-25 09:42:14.561820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.099 [2024-07-25 09:42:14.561855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:14.099 [2024-07-25 09:42:14.561865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.338 ms 00:28:14.099 [2024-07-25 09:42:14.561875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.099 [2024-07-25 09:42:14.582901] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:28:14.099 [2024-07-25 09:42:14.582937] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:14.099 [2024-07-25 09:42:14.582952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.099 [2024-07-25 09:42:14.582962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:14.099 [2024-07-25 09:42:14.582972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.999 ms 00:28:14.099 [2024-07-25 09:42:14.582981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.099 [2024-07-25 09:42:14.613313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.099 [2024-07-25 09:42:14.613352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:14.100 [2024-07-25 09:42:14.613363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.310 ms 00:28:14.100 [2024-07-25 09:42:14.613374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.100 [2024-07-25 09:42:14.632827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.100 [2024-07-25 09:42:14.632865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:14.100 [2024-07-25 09:42:14.632888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.415 ms 00:28:14.100 [2024-07-25 09:42:14.632900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.100 [2024-07-25 09:42:14.651870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.100 [2024-07-25 09:42:14.651906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:14.100 [2024-07-25 09:42:14.651916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.937 ms 00:28:14.100 [2024-07-25 09:42:14.651926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.100 [2024-07-25 09:42:14.652832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.100 [2024-07-25 09:42:14.652860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:14.100 [2024-07-25 09:42:14.652871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.794 ms 00:28:14.100 [2024-07-25 09:42:14.652882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.359 [2024-07-25 
09:42:14.756464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.359 [2024-07-25 09:42:14.756527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:14.359 [2024-07-25 09:42:14.756542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.752 ms 00:28:14.359 [2024-07-25 09:42:14.756552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.359 [2024-07-25 09:42:14.768510] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:14.359 [2024-07-25 09:42:14.784884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.359 [2024-07-25 09:42:14.784941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:14.359 [2024-07-25 09:42:14.784958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.238 ms 00:28:14.359 [2024-07-25 09:42:14.784966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.359 [2024-07-25 09:42:14.785077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.359 [2024-07-25 09:42:14.785088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:14.359 [2024-07-25 09:42:14.785099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:14.359 [2024-07-25 09:42:14.785108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.359 [2024-07-25 09:42:14.785164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.359 [2024-07-25 09:42:14.785172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:14.359 [2024-07-25 09:42:14.785185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:14.359 [2024-07-25 09:42:14.785193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.359 [2024-07-25 09:42:14.785217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.359 [2024-07-25 09:42:14.785248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:14.359 [2024-07-25 09:42:14.785259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:14.359 [2024-07-25 09:42:14.785267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.359 [2024-07-25 09:42:14.785303] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:14.359 [2024-07-25 09:42:14.785313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.359 [2024-07-25 09:42:14.785324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:14.359 [2024-07-25 09:42:14.785331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:14.359 [2024-07-25 09:42:14.785342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.359 [2024-07-25 09:42:14.823255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.360 [2024-07-25 09:42:14.823296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:14.360 [2024-07-25 09:42:14.823308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.954 ms 00:28:14.360 [2024-07-25 09:42:14.823317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.360 [2024-07-25 09:42:14.823420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.360 [2024-07-25 09:42:14.823436] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:14.360 [2024-07-25 09:42:14.823447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:28:14.360 [2024-07-25 09:42:14.823457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.360 [2024-07-25 09:42:14.824392] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:14.360 [2024-07-25 09:42:14.829418] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 424.711 ms, result 0 00:28:14.360 [2024-07-25 09:42:14.830551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:14.360 Some configs were skipped because the RPC state that can call them passed over. 00:28:14.360 09:42:14 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:28:14.619 [2024-07-25 09:42:15.054191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.619 [2024-07-25 09:42:15.054257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:14.619 [2024-07-25 09:42:15.054277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:28:14.619 [2024-07-25 09:42:15.054287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.619 [2024-07-25 09:42:15.054343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.813 ms, result 0 00:28:14.619 true 00:28:14.619 09:42:15 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:28:14.619 [2024-07-25 09:42:15.217697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.619 [2024-07-25 09:42:15.217753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:14.619 [2024-07-25 09:42:15.217766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:28:14.619 [2024-07-25 09:42:15.217776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.619 [2024-07-25 09:42:15.217811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.419 ms, result 0 00:28:14.619 true 00:28:14.878 09:42:15 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 80455 00:28:14.878 09:42:15 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 80455 ']' 00:28:14.878 09:42:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 80455 00:28:14.878 09:42:15 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:28:14.878 09:42:15 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:14.878 09:42:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80455 00:28:14.878 killing process with pid 80455 00:28:14.878 09:42:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:14.878 09:42:15 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:14.878 09:42:15 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80455' 00:28:14.878 09:42:15 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 80455 00:28:14.878 09:42:15 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 80455 00:28:15.814 [2024-07-25 09:42:16.415594] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.814 [2024-07-25 09:42:16.415664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:15.814 [2024-07-25 09:42:16.415679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:15.814 [2024-07-25 09:42:16.415689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.814 [2024-07-25 09:42:16.415712] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:15.814 [2024-07-25 09:42:16.419662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.814 [2024-07-25 09:42:16.419695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:15.814 [2024-07-25 09:42:16.419721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.943 ms 00:28:15.814 [2024-07-25 09:42:16.419732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.814 [2024-07-25 09:42:16.419975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.814 [2024-07-25 09:42:16.420008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:15.814 [2024-07-25 09:42:16.420017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:28:15.814 [2024-07-25 09:42:16.420025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:15.814 [2024-07-25 09:42:16.423503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:15.814 [2024-07-25 09:42:16.423544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:15.814 [2024-07-25 09:42:16.423554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.468 ms 00:28:15.814 [2024-07-25 09:42:16.423563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.074 [2024-07-25 09:42:16.429429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.074 [2024-07-25 09:42:16.429463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:16.074 [2024-07-25 09:42:16.429472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.843 ms 00:28:16.074 [2024-07-25 09:42:16.429482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.074 [2024-07-25 09:42:16.444952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.074 [2024-07-25 09:42:16.444989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:16.074 [2024-07-25 09:42:16.445015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.450 ms 00:28:16.074 [2024-07-25 09:42:16.445026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.074 [2024-07-25 09:42:16.456190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.074 [2024-07-25 09:42:16.456236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:16.074 [2024-07-25 09:42:16.456246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.139 ms 00:28:16.074 [2024-07-25 09:42:16.456255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.074 [2024-07-25 09:42:16.456386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.074 [2024-07-25 09:42:16.456400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:16.074 [2024-07-25 09:42:16.456408] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:28:16.074 [2024-07-25 09:42:16.456428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.074 [2024-07-25 09:42:16.471930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.074 [2024-07-25 09:42:16.471965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:16.074 [2024-07-25 09:42:16.471975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.515 ms 00:28:16.074 [2024-07-25 09:42:16.471983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.074 [2024-07-25 09:42:16.487509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.075 [2024-07-25 09:42:16.487543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:16.075 [2024-07-25 09:42:16.487552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.524 ms 00:28:16.075 [2024-07-25 09:42:16.487564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.075 [2024-07-25 09:42:16.502402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.075 [2024-07-25 09:42:16.502436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:16.075 [2024-07-25 09:42:16.502445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.836 ms 00:28:16.075 [2024-07-25 09:42:16.502453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.075 [2024-07-25 09:42:16.517267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.075 [2024-07-25 09:42:16.517300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:16.075 [2024-07-25 09:42:16.517325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.790 ms 00:28:16.075 [2024-07-25 09:42:16.517333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.075 [2024-07-25 09:42:16.517363] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:16.075 [2024-07-25 09:42:16.517378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 
09:42:16.517470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:28:16.075 [2024-07-25 09:42:16.517680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.517995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.518002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.518012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.518019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.518028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.518035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.518045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.518053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.518062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:16.075 [2024-07-25 09:42:16.518070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:16.076 [2024-07-25 09:42:16.518292] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:16.076 [2024-07-25 09:42:16.518301] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c8aeb57-7dc0-42c8-bc02-351163259a4d 00:28:16.076 [2024-07-25 09:42:16.518316] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:16.076 [2024-07-25 09:42:16.518324] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:16.076 [2024-07-25 09:42:16.518334] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:16.076 [2024-07-25 09:42:16.518343] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:16.076 [2024-07-25 09:42:16.518355] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:16.076 [2024-07-25 09:42:16.518363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:16.076 [2024-07-25 09:42:16.518374] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:16.076 [2024-07-25 09:42:16.518381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:16.076 [2024-07-25 09:42:16.518405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:16.076 [2024-07-25 09:42:16.518414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
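(Editor's note: the statistics dump above reports "total valid LBAs: 0", "total writes: 960", "user writes: 0" and "WAF: inf". Read as the conventional ratio of total media writes to user writes, an infinite WAF is expected at this point, since only FTL metadata has been persisted during shutdown and no user data has been written yet. The following is an illustrative sketch of that arithmetic using the counters printed above; it is not the actual ftl_debug.c implementation.)

/* Editor's sketch (not SPDK code): how a WAF figure like the one in the
 * dump above follows from the two counters it prints. */
#include <math.h>
#include <stdio.h>

int main(void)
{
        double total_writes = 960.0;  /* "total writes: 960" from the dump */
        double user_writes  = 0.0;    /* "user writes: 0" from the dump    */

        /* With zero user writes the ratio is undefined/infinite, which is
         * why the log prints "WAF: inf". */
        double waf = user_writes > 0.0 ? total_writes / user_writes : INFINITY;

        printf("WAF: %g\n", waf);     /* prints "WAF: inf" */
        return 0;
}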
00:28:16.076 [2024-07-25 09:42:16.518427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:16.076 [2024-07-25 09:42:16.518435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.055 ms 00:28:16.076 [2024-07-25 09:42:16.518451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.076 [2024-07-25 09:42:16.537888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.076 [2024-07-25 09:42:16.537923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:16.076 [2024-07-25 09:42:16.537932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.443 ms 00:28:16.076 [2024-07-25 09:42:16.537943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.076 [2024-07-25 09:42:16.538462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:16.076 [2024-07-25 09:42:16.538485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:16.076 [2024-07-25 09:42:16.538496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:28:16.076 [2024-07-25 09:42:16.538505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.076 [2024-07-25 09:42:16.602366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.076 [2024-07-25 09:42:16.602418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:16.076 [2024-07-25 09:42:16.602429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.076 [2024-07-25 09:42:16.602437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.076 [2024-07-25 09:42:16.602521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.076 [2024-07-25 09:42:16.602533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:16.076 [2024-07-25 09:42:16.602544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.076 [2024-07-25 09:42:16.602553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.076 [2024-07-25 09:42:16.602596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.076 [2024-07-25 09:42:16.602609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:16.076 [2024-07-25 09:42:16.602617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.076 [2024-07-25 09:42:16.602628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.076 [2024-07-25 09:42:16.602645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.076 [2024-07-25 09:42:16.602654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:16.076 [2024-07-25 09:42:16.602661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.076 [2024-07-25 09:42:16.602672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.336 [2024-07-25 09:42:16.722421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.336 [2024-07-25 09:42:16.722477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:16.336 [2024-07-25 09:42:16.722488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.336 [2024-07-25 09:42:16.722497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.336 [2024-07-25 
09:42:16.820649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.336 [2024-07-25 09:42:16.820708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:16.336 [2024-07-25 09:42:16.820723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.336 [2024-07-25 09:42:16.820732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.336 [2024-07-25 09:42:16.820813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.336 [2024-07-25 09:42:16.820824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:16.336 [2024-07-25 09:42:16.820832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.336 [2024-07-25 09:42:16.820844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.336 [2024-07-25 09:42:16.820873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.336 [2024-07-25 09:42:16.820882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:16.336 [2024-07-25 09:42:16.820890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.336 [2024-07-25 09:42:16.820898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.336 [2024-07-25 09:42:16.821011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.336 [2024-07-25 09:42:16.821029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:16.336 [2024-07-25 09:42:16.821037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.336 [2024-07-25 09:42:16.821045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.336 [2024-07-25 09:42:16.821081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.336 [2024-07-25 09:42:16.821093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:16.336 [2024-07-25 09:42:16.821100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.336 [2024-07-25 09:42:16.821109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.336 [2024-07-25 09:42:16.821149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.336 [2024-07-25 09:42:16.821162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:16.336 [2024-07-25 09:42:16.821170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.336 [2024-07-25 09:42:16.821180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.336 [2024-07-25 09:42:16.821223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:16.336 [2024-07-25 09:42:16.821249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:16.336 [2024-07-25 09:42:16.821257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:16.336 [2024-07-25 09:42:16.821266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:16.336 [2024-07-25 09:42:16.821399] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 406.571 ms, result 0 00:28:17.273 09:42:17 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:17.273 09:42:17 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:17.533 [2024-07-25 09:42:17.962954] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:17.533 [2024-07-25 09:42:17.963095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80524 ] 00:28:17.533 [2024-07-25 09:42:18.129671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.792 [2024-07-25 09:42:18.372301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.358 [2024-07-25 09:42:18.774435] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:18.358 [2024-07-25 09:42:18.774494] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:18.358 [2024-07-25 09:42:18.932590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.358 [2024-07-25 09:42:18.932648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:18.358 [2024-07-25 09:42:18.932663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:18.358 [2024-07-25 09:42:18.932672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.358 [2024-07-25 09:42:18.935649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.358 [2024-07-25 09:42:18.935689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:18.358 [2024-07-25 09:42:18.935699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.962 ms 00:28:18.358 [2024-07-25 09:42:18.935707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.358 [2024-07-25 09:42:18.935809] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:18.359 [2024-07-25 09:42:18.937268] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:18.359 [2024-07-25 09:42:18.937300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.359 [2024-07-25 09:42:18.937309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:18.359 [2024-07-25 09:42:18.937317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.507 ms 00:28:18.359 [2024-07-25 09:42:18.937325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.359 [2024-07-25 09:42:18.938713] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:18.359 [2024-07-25 09:42:18.959232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.359 [2024-07-25 09:42:18.959269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:18.359 [2024-07-25 09:42:18.959285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.558 ms 00:28:18.359 [2024-07-25 09:42:18.959292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.359 [2024-07-25 09:42:18.959385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.359 [2024-07-25 09:42:18.959398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:18.359 [2024-07-25 09:42:18.959409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:28:18.359 [2024-07-25 09:42:18.959416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.359 [2024-07-25 09:42:18.966131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.359 [2024-07-25 09:42:18.966162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:18.359 [2024-07-25 09:42:18.966171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.691 ms 00:28:18.359 [2024-07-25 09:42:18.966179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.359 [2024-07-25 09:42:18.966283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.359 [2024-07-25 09:42:18.966299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:18.359 [2024-07-25 09:42:18.966309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:28:18.359 [2024-07-25 09:42:18.966317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.359 [2024-07-25 09:42:18.966351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.359 [2024-07-25 09:42:18.966361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:18.359 [2024-07-25 09:42:18.966388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:18.359 [2024-07-25 09:42:18.966396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.359 [2024-07-25 09:42:18.966418] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:18.619 [2024-07-25 09:42:18.972377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.619 [2024-07-25 09:42:18.972414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:18.619 [2024-07-25 09:42:18.972426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.977 ms 00:28:18.619 [2024-07-25 09:42:18.972435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.619 [2024-07-25 09:42:18.972507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.619 [2024-07-25 09:42:18.972522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:18.619 [2024-07-25 09:42:18.972533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:18.619 [2024-07-25 09:42:18.972543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.619 [2024-07-25 09:42:18.972567] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:18.619 [2024-07-25 09:42:18.972597] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:18.619 [2024-07-25 09:42:18.972640] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:18.619 [2024-07-25 09:42:18.972665] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:18.619 [2024-07-25 09:42:18.972766] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:18.619 [2024-07-25 09:42:18.972786] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:18.619 [2024-07-25 09:42:18.972799] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:18.619 [2024-07-25 09:42:18.972813] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:18.619 [2024-07-25 09:42:18.972825] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:18.619 [2024-07-25 09:42:18.972837] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:18.619 [2024-07-25 09:42:18.972847] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:18.619 [2024-07-25 09:42:18.972856] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:18.619 [2024-07-25 09:42:18.972866] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:18.619 [2024-07-25 09:42:18.972878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.619 [2024-07-25 09:42:18.972887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:18.619 [2024-07-25 09:42:18.972896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:28:18.619 [2024-07-25 09:42:18.972906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.619 [2024-07-25 09:42:18.972991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.619 [2024-07-25 09:42:18.973012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:18.619 [2024-07-25 09:42:18.973025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:28:18.619 [2024-07-25 09:42:18.973045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.619 [2024-07-25 09:42:18.973131] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:18.619 [2024-07-25 09:42:18.973147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:18.619 [2024-07-25 09:42:18.973156] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:18.619 [2024-07-25 09:42:18.973165] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:18.619 [2024-07-25 09:42:18.973181] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973188] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:18.619 [2024-07-25 09:42:18.973195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:18.619 [2024-07-25 09:42:18.973202] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:18.619 [2024-07-25 09:42:18.973216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:18.619 [2024-07-25 09:42:18.973225] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:18.619 [2024-07-25 09:42:18.973243] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:18.619 [2024-07-25 09:42:18.973251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:18.619 [2024-07-25 09:42:18.973260] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:18.619 [2024-07-25 09:42:18.973267] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973274] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:18.619 [2024-07-25 09:42:18.973281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:18.619 [2024-07-25 09:42:18.973302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:18.619 [2024-07-25 09:42:18.973318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973324] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.619 [2024-07-25 09:42:18.973331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:18.619 [2024-07-25 09:42:18.973338] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.619 [2024-07-25 09:42:18.973353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:18.619 [2024-07-25 09:42:18.973361] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.619 [2024-07-25 09:42:18.973375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:18.619 [2024-07-25 09:42:18.973381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973388] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:18.619 [2024-07-25 09:42:18.973394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:18.619 [2024-07-25 09:42:18.973401] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973407] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:18.619 [2024-07-25 09:42:18.973415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:18.619 [2024-07-25 09:42:18.973421] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:18.619 [2024-07-25 09:42:18.973428] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:18.619 [2024-07-25 09:42:18.973435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:18.619 [2024-07-25 09:42:18.973442] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:18.619 [2024-07-25 09:42:18.973448] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973455] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:18.619 [2024-07-25 09:42:18.973461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:18.619 [2024-07-25 09:42:18.973468] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973474] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:18.619 [2024-07-25 09:42:18.973482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:18.619 [2024-07-25 09:42:18.973489] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:18.619 [2024-07-25 09:42:18.973498] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:18.619 [2024-07-25 09:42:18.973509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:18.619 
[2024-07-25 09:42:18.973517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:18.619 [2024-07-25 09:42:18.973524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:18.620 [2024-07-25 09:42:18.973531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:18.620 [2024-07-25 09:42:18.973538] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:18.620 [2024-07-25 09:42:18.973545] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:18.620 [2024-07-25 09:42:18.973553] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:18.620 [2024-07-25 09:42:18.973563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:18.620 [2024-07-25 09:42:18.973571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:18.620 [2024-07-25 09:42:18.973579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:18.620 [2024-07-25 09:42:18.973586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:18.620 [2024-07-25 09:42:18.973594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:18.620 [2024-07-25 09:42:18.973603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:18.620 [2024-07-25 09:42:18.973610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:18.620 [2024-07-25 09:42:18.973618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:18.620 [2024-07-25 09:42:18.973625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:18.620 [2024-07-25 09:42:18.973632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:18.620 [2024-07-25 09:42:18.973639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:18.620 [2024-07-25 09:42:18.973647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:18.620 [2024-07-25 09:42:18.973655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:18.620 [2024-07-25 09:42:18.973663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:18.620 [2024-07-25 09:42:18.973672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:18.620 [2024-07-25 09:42:18.973680] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:18.620 [2024-07-25 09:42:18.973688] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:18.620 [2024-07-25 09:42:18.973696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:18.620 [2024-07-25 09:42:18.973704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:18.620 [2024-07-25 09:42:18.973711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:18.620 [2024-07-25 09:42:18.973718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:18.620 [2024-07-25 09:42:18.973729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:18.973739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:18.620 [2024-07-25 09:42:18.973747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:28:18.620 [2024-07-25 09:42:18.973755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.026791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.026836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:18.620 [2024-07-25 09:42:19.026852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.082 ms 00:28:18.620 [2024-07-25 09:42:19.026860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.027023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.027037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:18.620 [2024-07-25 09:42:19.027046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:18.620 [2024-07-25 09:42:19.027054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.078872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.078912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:18.620 [2024-07-25 09:42:19.078923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.897 ms 00:28:18.620 [2024-07-25 09:42:19.078934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.079021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.079031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:18.620 [2024-07-25 09:42:19.079039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:18.620 [2024-07-25 09:42:19.079047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.079485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.079507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:18.620 [2024-07-25 09:42:19.079515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:28:18.620 [2024-07-25 09:42:19.079522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 
09:42:19.079636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.079657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:18.620 [2024-07-25 09:42:19.079666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:28:18.620 [2024-07-25 09:42:19.079673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.100778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.100818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:18.620 [2024-07-25 09:42:19.100829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.122 ms 00:28:18.620 [2024-07-25 09:42:19.100838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.121994] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:28:18.620 [2024-07-25 09:42:19.122029] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:18.620 [2024-07-25 09:42:19.122041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.122049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:18.620 [2024-07-25 09:42:19.122058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.102 ms 00:28:18.620 [2024-07-25 09:42:19.122064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.152193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.152249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:18.620 [2024-07-25 09:42:19.152261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.113 ms 00:28:18.620 [2024-07-25 09:42:19.152269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.171136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.171171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:18.620 [2024-07-25 09:42:19.171182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.834 ms 00:28:18.620 [2024-07-25 09:42:19.171189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.189604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.189637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:18.620 [2024-07-25 09:42:19.189647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.377 ms 00:28:18.620 [2024-07-25 09:42:19.189654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.620 [2024-07-25 09:42:19.190506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.620 [2024-07-25 09:42:19.190533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:18.620 [2024-07-25 09:42:19.190543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:28:18.620 [2024-07-25 09:42:19.190550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.880 [2024-07-25 09:42:19.278465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:18.880 [2024-07-25 09:42:19.278522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:18.880 [2024-07-25 09:42:19.278535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.057 ms 00:28:18.880 [2024-07-25 09:42:19.278542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.880 [2024-07-25 09:42:19.290275] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:18.880 [2024-07-25 09:42:19.306869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.880 [2024-07-25 09:42:19.306918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:18.880 [2024-07-25 09:42:19.306930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.254 ms 00:28:18.880 [2024-07-25 09:42:19.306938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.880 [2024-07-25 09:42:19.307047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.880 [2024-07-25 09:42:19.307059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:18.880 [2024-07-25 09:42:19.307068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:18.880 [2024-07-25 09:42:19.307076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.880 [2024-07-25 09:42:19.307129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.880 [2024-07-25 09:42:19.307139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:18.880 [2024-07-25 09:42:19.307147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:18.880 [2024-07-25 09:42:19.307156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.880 [2024-07-25 09:42:19.307176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.880 [2024-07-25 09:42:19.307187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:18.880 [2024-07-25 09:42:19.307195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:18.880 [2024-07-25 09:42:19.307202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.880 [2024-07-25 09:42:19.307249] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:18.880 [2024-07-25 09:42:19.307261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.880 [2024-07-25 09:42:19.307269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:18.880 [2024-07-25 09:42:19.307278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:18.880 [2024-07-25 09:42:19.307286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.880 [2024-07-25 09:42:19.346628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.880 [2024-07-25 09:42:19.346690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:18.880 [2024-07-25 09:42:19.346703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.396 ms 00:28:18.880 [2024-07-25 09:42:19.346711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.880 [2024-07-25 09:42:19.346815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:18.880 [2024-07-25 09:42:19.346826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:28:18.880 [2024-07-25 09:42:19.346835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:18.880 [2024-07-25 09:42:19.346842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:18.880 [2024-07-25 09:42:19.347774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:18.880 [2024-07-25 09:42:19.352831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 415.702 ms, result 0 00:28:18.880 [2024-07-25 09:42:19.353682] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:18.880 [2024-07-25 09:42:19.372524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:28.394  Copying: 29/256 [MB] (29 MBps) Copying: 55/256 [MB] (26 MBps) Copying: 81/256 [MB] (26 MBps) Copying: 108/256 [MB] (26 MBps) Copying: 135/256 [MB] (26 MBps) Copying: 163/256 [MB] (27 MBps) Copying: 189/256 [MB] (26 MBps) Copying: 217/256 [MB] (27 MBps) Copying: 243/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-07-25 09:42:28.844747] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:28.394 [2024-07-25 09:42:28.860061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.394 [2024-07-25 09:42:28.860108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:28.394 [2024-07-25 09:42:28.860122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:28.394 [2024-07-25 09:42:28.860130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.394 [2024-07-25 09:42:28.860161] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:28.394 [2024-07-25 09:42:28.864238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.394 [2024-07-25 09:42:28.864273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:28.394 [2024-07-25 09:42:28.864283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:28:28.394 [2024-07-25 09:42:28.864290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.394 [2024-07-25 09:42:28.864560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.394 [2024-07-25 09:42:28.864581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:28.394 [2024-07-25 09:42:28.864592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.225 ms 00:28:28.394 [2024-07-25 09:42:28.864600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.394 [2024-07-25 09:42:28.867526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.394 [2024-07-25 09:42:28.867546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:28.394 [2024-07-25 09:42:28.867559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.915 ms 00:28:28.394 [2024-07-25 09:42:28.867566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.394 [2024-07-25 09:42:28.873528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.394 [2024-07-25 09:42:28.873557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:28.394 [2024-07-25 09:42:28.873566] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.957 ms 00:28:28.394 [2024-07-25 09:42:28.873573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.394 [2024-07-25 09:42:28.913195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.394 [2024-07-25 09:42:28.913238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:28.394 [2024-07-25 09:42:28.913250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.635 ms 00:28:28.394 [2024-07-25 09:42:28.913257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.394 [2024-07-25 09:42:28.936042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.394 [2024-07-25 09:42:28.936079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:28.394 [2024-07-25 09:42:28.936091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.794 ms 00:28:28.394 [2024-07-25 09:42:28.936104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.394 [2024-07-25 09:42:28.936242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.394 [2024-07-25 09:42:28.936254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:28.394 [2024-07-25 09:42:28.936262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:28:28.394 [2024-07-25 09:42:28.936270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.394 [2024-07-25 09:42:28.976046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.394 [2024-07-25 09:42:28.976082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:28.394 [2024-07-25 09:42:28.976093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.835 ms 00:28:28.394 [2024-07-25 09:42:28.976101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.654 [2024-07-25 09:42:29.014769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.654 [2024-07-25 09:42:29.014807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:28.654 [2024-07-25 09:42:29.014817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.686 ms 00:28:28.654 [2024-07-25 09:42:29.014824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.654 [2024-07-25 09:42:29.052835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.654 [2024-07-25 09:42:29.052878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:28.654 [2024-07-25 09:42:29.052889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.046 ms 00:28:28.654 [2024-07-25 09:42:29.052898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.654 [2024-07-25 09:42:29.090782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.654 [2024-07-25 09:42:29.090820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:28.654 [2024-07-25 09:42:29.090830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.889 ms 00:28:28.654 [2024-07-25 09:42:29.090838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.654 [2024-07-25 09:42:29.090873] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:28.654 [2024-07-25 09:42:29.090891] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.090997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091088] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 
09:42:29.091309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:28:28.655 [2024-07-25 09:42:29.091507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:28.655 [2024-07-25 09:42:29.091600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:28:28.656 [2024-07-25 09:42:29.091709] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:28.656 [2024-07-25 09:42:29.091717] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c8aeb57-7dc0-42c8-bc02-351163259a4d 00:28:28.656 [2024-07-25 09:42:29.091726] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:28.656 [2024-07-25 09:42:29.091733] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:28.656 [2024-07-25 09:42:29.091753] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:28.656 [2024-07-25 09:42:29.091761] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:28.656 [2024-07-25 09:42:29.091769] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:28.656 [2024-07-25 09:42:29.091779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:28.656 [2024-07-25 09:42:29.091786] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:28.656 [2024-07-25 09:42:29.091793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:28.656 [2024-07-25 09:42:29.091800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:28.656 [2024-07-25 09:42:29.091808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.656 [2024-07-25 09:42:29.091816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:28.656 [2024-07-25 09:42:29.091837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:28:28.656 [2024-07-25 09:42:29.091844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.656 [2024-07-25 09:42:29.112361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.656 [2024-07-25 09:42:29.112398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:28.656 [2024-07-25 09:42:29.112410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.534 ms 00:28:28.656 [2024-07-25 09:42:29.112417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.656 [2024-07-25 09:42:29.112939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.656 [2024-07-25 09:42:29.112963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:28.656 [2024-07-25 09:42:29.112971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.484 ms 00:28:28.656 [2024-07-25 09:42:29.112979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.656 [2024-07-25 09:42:29.160982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.656 [2024-07-25 09:42:29.161025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.656 [2024-07-25 09:42:29.161037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.656 [2024-07-25 09:42:29.161045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.656 [2024-07-25 09:42:29.161131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.656 [2024-07-25 09:42:29.161143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.656 [2024-07-25 09:42:29.161152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.656 [2024-07-25 09:42:29.161159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:28.656 [2024-07-25 09:42:29.161218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.656 [2024-07-25 09:42:29.161229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.656 [2024-07-25 09:42:29.161237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.656 [2024-07-25 09:42:29.161256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.656 [2024-07-25 09:42:29.161274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.656 [2024-07-25 09:42:29.161282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.656 [2024-07-25 09:42:29.161293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.656 [2024-07-25 09:42:29.161300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.916 [2024-07-25 09:42:29.283194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.916 [2024-07-25 09:42:29.283247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.916 [2024-07-25 09:42:29.283260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.916 [2024-07-25 09:42:29.283267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.916 [2024-07-25 09:42:29.384849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.916 [2024-07-25 09:42:29.384907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.916 [2024-07-25 09:42:29.384920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.916 [2024-07-25 09:42:29.384928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.916 [2024-07-25 09:42:29.385017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.916 [2024-07-25 09:42:29.385026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.916 [2024-07-25 09:42:29.385035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.916 [2024-07-25 09:42:29.385043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.916 [2024-07-25 09:42:29.385069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.916 [2024-07-25 09:42:29.385078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.916 [2024-07-25 09:42:29.385085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.916 [2024-07-25 09:42:29.385095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.916 [2024-07-25 09:42:29.385188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.916 [2024-07-25 09:42:29.385201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.916 [2024-07-25 09:42:29.385209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.916 [2024-07-25 09:42:29.385216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.916 [2024-07-25 09:42:29.385267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.916 [2024-07-25 09:42:29.385277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:28.916 [2024-07-25 09:42:29.385285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.916 [2024-07-25 
09:42:29.385293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.916 [2024-07-25 09:42:29.385334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.916 [2024-07-25 09:42:29.385342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.916 [2024-07-25 09:42:29.385350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.916 [2024-07-25 09:42:29.385358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.916 [2024-07-25 09:42:29.385403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.916 [2024-07-25 09:42:29.385412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.916 [2024-07-25 09:42:29.385419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.916 [2024-07-25 09:42:29.385429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.916 [2024-07-25 09:42:29.385561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.509 ms, result 0 00:28:30.294 00:28:30.294 00:28:30.294 09:42:30 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:28:30.294 09:42:30 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:30.553 09:42:31 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:30.812 [2024-07-25 09:42:31.177306] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:30.812 [2024-07-25 09:42:31.177446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80662 ] 00:28:30.812 [2024-07-25 09:42:31.343965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.071 [2024-07-25 09:42:31.577705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.640 [2024-07-25 09:42:31.968565] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:31.640 [2024-07-25 09:42:31.968643] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:31.640 [2024-07-25 09:42:32.126400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.126449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:31.640 [2024-07-25 09:42:32.126461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:31.640 [2024-07-25 09:42:32.126469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.129329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.129363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:31.640 [2024-07-25 09:42:32.129373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.848 ms 00:28:31.640 [2024-07-25 09:42:32.129380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.129472] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:31.640 [2024-07-25 09:42:32.130483] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:31.640 [2024-07-25 09:42:32.130514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.130523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:31.640 [2024-07-25 09:42:32.130531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:28:31.640 [2024-07-25 09:42:32.130538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.131981] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:31.640 [2024-07-25 09:42:32.151065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.151102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:31.640 [2024-07-25 09:42:32.151134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.121 ms 00:28:31.640 [2024-07-25 09:42:32.151142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.151255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.151268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:31.640 [2024-07-25 09:42:32.151277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:28:31.640 [2024-07-25 09:42:32.151285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.158242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:31.640 [2024-07-25 09:42:32.158269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:31.640 [2024-07-25 09:42:32.158294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.931 ms 00:28:31.640 [2024-07-25 09:42:32.158303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.158394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.158407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:31.640 [2024-07-25 09:42:32.158417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:31.640 [2024-07-25 09:42:32.158424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.158453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.158465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:31.640 [2024-07-25 09:42:32.158473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:31.640 [2024-07-25 09:42:32.158480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.158502] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:31.640 [2024-07-25 09:42:32.163705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.163734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:31.640 [2024-07-25 09:42:32.163760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.219 ms 00:28:31.640 [2024-07-25 09:42:32.163768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.163834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.163845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:31.640 [2024-07-25 09:42:32.163853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:31.640 [2024-07-25 09:42:32.163861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.163878] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:31.640 [2024-07-25 09:42:32.163901] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:31.640 [2024-07-25 09:42:32.163934] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:31.640 [2024-07-25 09:42:32.163949] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:31.640 [2024-07-25 09:42:32.164030] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:31.640 [2024-07-25 09:42:32.164040] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:31.640 [2024-07-25 09:42:32.164049] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:31.640 [2024-07-25 09:42:32.164058] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:31.640 [2024-07-25 09:42:32.164071] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:31.640 [2024-07-25 09:42:32.164078] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:31.640 [2024-07-25 09:42:32.164086] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:31.640 [2024-07-25 09:42:32.164093] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:31.640 [2024-07-25 09:42:32.164100] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:31.640 [2024-07-25 09:42:32.164107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.164114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:31.640 [2024-07-25 09:42:32.164121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:28:31.640 [2024-07-25 09:42:32.164129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.164195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.640 [2024-07-25 09:42:32.164206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:31.640 [2024-07-25 09:42:32.164216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:28:31.640 [2024-07-25 09:42:32.164223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.640 [2024-07-25 09:42:32.164319] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:31.640 [2024-07-25 09:42:32.164329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:31.640 [2024-07-25 09:42:32.164339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:31.640 [2024-07-25 09:42:32.164346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.640 [2024-07-25 09:42:32.164355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:31.640 [2024-07-25 09:42:32.164362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:31.640 [2024-07-25 09:42:32.164369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:31.640 [2024-07-25 09:42:32.164376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:31.640 [2024-07-25 09:42:32.164384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:31.640 [2024-07-25 09:42:32.164391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:31.640 [2024-07-25 09:42:32.164397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:31.640 [2024-07-25 09:42:32.164403] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:31.640 [2024-07-25 09:42:32.164409] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:31.640 [2024-07-25 09:42:32.164417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:31.640 [2024-07-25 09:42:32.164425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:31.640 [2024-07-25 09:42:32.164431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.640 [2024-07-25 09:42:32.164437] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:31.640 [2024-07-25 09:42:32.164444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:31.640 [2024-07-25 09:42:32.164463] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.640 [2024-07-25 09:42:32.164471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:31.640 [2024-07-25 09:42:32.164477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:31.640 [2024-07-25 09:42:32.164484] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.640 [2024-07-25 09:42:32.164490] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:31.640 [2024-07-25 09:42:32.164497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:31.640 [2024-07-25 09:42:32.164503] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.640 [2024-07-25 09:42:32.164509] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:31.640 [2024-07-25 09:42:32.164516] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:31.640 [2024-07-25 09:42:32.164522] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.640 [2024-07-25 09:42:32.164528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:31.640 [2024-07-25 09:42:32.164534] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:31.640 [2024-07-25 09:42:32.164540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:31.640 [2024-07-25 09:42:32.164546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:31.641 [2024-07-25 09:42:32.164553] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:31.641 [2024-07-25 09:42:32.164559] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:31.641 [2024-07-25 09:42:32.164565] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:31.641 [2024-07-25 09:42:32.164572] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:31.641 [2024-07-25 09:42:32.164578] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:31.641 [2024-07-25 09:42:32.164585] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:31.641 [2024-07-25 09:42:32.164591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:31.641 [2024-07-25 09:42:32.164597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.641 [2024-07-25 09:42:32.164604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:31.641 [2024-07-25 09:42:32.164610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:31.641 [2024-07-25 09:42:32.164616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.641 [2024-07-25 09:42:32.164622] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:31.641 [2024-07-25 09:42:32.164630] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:31.641 [2024-07-25 09:42:32.164636] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:31.641 [2024-07-25 09:42:32.164646] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:31.641 [2024-07-25 09:42:32.164653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:31.641 [2024-07-25 09:42:32.164660] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:31.641 [2024-07-25 09:42:32.164666] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:31.641 
[2024-07-25 09:42:32.164674] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:31.641 [2024-07-25 09:42:32.164680] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:31.641 [2024-07-25 09:42:32.164687] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:31.641 [2024-07-25 09:42:32.164695] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:31.641 [2024-07-25 09:42:32.164704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.641 [2024-07-25 09:42:32.164713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:31.641 [2024-07-25 09:42:32.164722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:31.641 [2024-07-25 09:42:32.164730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:31.641 [2024-07-25 09:42:32.164737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:31.641 [2024-07-25 09:42:32.164744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:31.641 [2024-07-25 09:42:32.164750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:31.641 [2024-07-25 09:42:32.164769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:31.641 [2024-07-25 09:42:32.164777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:31.641 [2024-07-25 09:42:32.164783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:31.641 [2024-07-25 09:42:32.164790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:31.641 [2024-07-25 09:42:32.164797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:31.641 [2024-07-25 09:42:32.164804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:31.641 [2024-07-25 09:42:32.164824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:31.641 [2024-07-25 09:42:32.164833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:31.641 [2024-07-25 09:42:32.164839] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:31.641 [2024-07-25 09:42:32.164848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:31.641 [2024-07-25 09:42:32.164855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:31.641 [2024-07-25 09:42:32.164864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:31.641 [2024-07-25 09:42:32.164871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:31.641 [2024-07-25 09:42:32.164879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:31.641 [2024-07-25 09:42:32.164886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.641 [2024-07-25 09:42:32.164894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:31.641 [2024-07-25 09:42:32.164901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:28:31.641 [2024-07-25 09:42:32.164909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.641 [2024-07-25 09:42:32.217981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.641 [2024-07-25 09:42:32.218033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:31.641 [2024-07-25 09:42:32.218046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.119 ms 00:28:31.641 [2024-07-25 09:42:32.218053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.641 [2024-07-25 09:42:32.218217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.641 [2024-07-25 09:42:32.218238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:31.641 [2024-07-25 09:42:32.218247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:31.641 [2024-07-25 09:42:32.218254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.901 [2024-07-25 09:42:32.266615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.901 [2024-07-25 09:42:32.266659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:31.901 [2024-07-25 09:42:32.266673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.431 ms 00:28:31.901 [2024-07-25 09:42:32.266680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.901 [2024-07-25 09:42:32.266767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.901 [2024-07-25 09:42:32.266776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:31.901 [2024-07-25 09:42:32.266785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:31.901 [2024-07-25 09:42:32.266792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.901 [2024-07-25 09:42:32.267234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.901 [2024-07-25 09:42:32.267260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:31.901 [2024-07-25 09:42:32.267269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:28:31.901 [2024-07-25 09:42:32.267280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.901 [2024-07-25 09:42:32.267394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.901 [2024-07-25 09:42:32.267412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:31.901 [2024-07-25 09:42:32.267420] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:28:31.901 [2024-07-25 09:42:32.267427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.901 [2024-07-25 09:42:32.287432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.901 [2024-07-25 09:42:32.287469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:31.901 [2024-07-25 09:42:32.287496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.021 ms 00:28:31.901 [2024-07-25 09:42:32.287503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.901 [2024-07-25 09:42:32.306967] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:28:31.901 [2024-07-25 09:42:32.307001] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:31.901 [2024-07-25 09:42:32.307030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.901 [2024-07-25 09:42:32.307037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:31.901 [2024-07-25 09:42:32.307046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.440 ms 00:28:31.901 [2024-07-25 09:42:32.307053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.901 [2024-07-25 09:42:32.337333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.901 [2024-07-25 09:42:32.337371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:31.901 [2024-07-25 09:42:32.337398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.265 ms 00:28:31.901 [2024-07-25 09:42:32.337406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.901 [2024-07-25 09:42:32.356864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.901 [2024-07-25 09:42:32.356902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:31.901 [2024-07-25 09:42:32.356912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.407 ms 00:28:31.901 [2024-07-25 09:42:32.356921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.901 [2024-07-25 09:42:32.376383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.901 [2024-07-25 09:42:32.376421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:31.901 [2024-07-25 09:42:32.376433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.428 ms 00:28:31.901 [2024-07-25 09:42:32.376442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.902 [2024-07-25 09:42:32.377351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.902 [2024-07-25 09:42:32.377378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:31.902 [2024-07-25 09:42:32.377388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:28:31.902 [2024-07-25 09:42:32.377396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.902 [2024-07-25 09:42:32.468017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.902 [2024-07-25 09:42:32.468083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:31.902 [2024-07-25 09:42:32.468098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 90.768 ms 00:28:31.902 [2024-07-25 09:42:32.468106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.902 [2024-07-25 09:42:32.481106] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:31.902 [2024-07-25 09:42:32.497754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.902 [2024-07-25 09:42:32.497822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:31.902 [2024-07-25 09:42:32.497835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.538 ms 00:28:31.902 [2024-07-25 09:42:32.497843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.902 [2024-07-25 09:42:32.497958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.902 [2024-07-25 09:42:32.497971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:31.902 [2024-07-25 09:42:32.497979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:31.902 [2024-07-25 09:42:32.497986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.902 [2024-07-25 09:42:32.498040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.902 [2024-07-25 09:42:32.498049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:31.902 [2024-07-25 09:42:32.498057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:31.902 [2024-07-25 09:42:32.498064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.902 [2024-07-25 09:42:32.498084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.902 [2024-07-25 09:42:32.498095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:31.902 [2024-07-25 09:42:32.498102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:31.902 [2024-07-25 09:42:32.498109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.902 [2024-07-25 09:42:32.498140] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:31.902 [2024-07-25 09:42:32.498150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.902 [2024-07-25 09:42:32.498158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:31.902 [2024-07-25 09:42:32.498165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:31.902 [2024-07-25 09:42:32.498173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.161 [2024-07-25 09:42:32.536173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.162 [2024-07-25 09:42:32.536210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:32.162 [2024-07-25 09:42:32.536221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.052 ms 00:28:32.162 [2024-07-25 09:42:32.536234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.162 [2024-07-25 09:42:32.536335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.162 [2024-07-25 09:42:32.536347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:32.162 [2024-07-25 09:42:32.536355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:32.162 [2024-07-25 09:42:32.536363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:32.162 [2024-07-25 09:42:32.537313] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:32.162 [2024-07-25 09:42:32.542125] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 411.377 ms, result 0 00:28:32.162 [2024-07-25 09:42:32.542944] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:32.162 [2024-07-25 09:42:32.561203] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:32.162  Copying: 4096/4096 [kB] (average 25 MBps)[2024-07-25 09:42:32.720407] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:32.162 [2024-07-25 09:42:32.735751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.162 [2024-07-25 09:42:32.735796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:32.162 [2024-07-25 09:42:32.735808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:32.162 [2024-07-25 09:42:32.735822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.162 [2024-07-25 09:42:32.735868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:32.162 [2024-07-25 09:42:32.739879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.162 [2024-07-25 09:42:32.739908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:32.162 [2024-07-25 09:42:32.739918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.005 ms 00:28:32.162 [2024-07-25 09:42:32.739926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.162 [2024-07-25 09:42:32.741944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.162 [2024-07-25 09:42:32.741980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:32.162 [2024-07-25 09:42:32.741991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.997 ms 00:28:32.162 [2024-07-25 09:42:32.741998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.162 [2024-07-25 09:42:32.745313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.162 [2024-07-25 09:42:32.745345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:32.162 [2024-07-25 09:42:32.745354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms 00:28:32.162 [2024-07-25 09:42:32.745362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.162 [2024-07-25 09:42:32.751021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.162 [2024-07-25 09:42:32.751052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:32.162 [2024-07-25 09:42:32.751062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.644 ms 00:28:32.162 [2024-07-25 09:42:32.751069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.423 [2024-07-25 09:42:32.789647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.423 [2024-07-25 09:42:32.789685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:32.423 [2024-07-25 09:42:32.789697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
38.591 ms 00:28:32.423 [2024-07-25 09:42:32.789704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.423 [2024-07-25 09:42:32.812095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.423 [2024-07-25 09:42:32.812133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:32.423 [2024-07-25 09:42:32.812149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.380 ms 00:28:32.423 [2024-07-25 09:42:32.812156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.423 [2024-07-25 09:42:32.812304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.423 [2024-07-25 09:42:32.812317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:32.423 [2024-07-25 09:42:32.812326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:28:32.423 [2024-07-25 09:42:32.812333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.423 [2024-07-25 09:42:32.850979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.423 [2024-07-25 09:42:32.851015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:32.423 [2024-07-25 09:42:32.851026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.702 ms 00:28:32.423 [2024-07-25 09:42:32.851034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.423 [2024-07-25 09:42:32.888535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.423 [2024-07-25 09:42:32.888571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:32.423 [2024-07-25 09:42:32.888582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.524 ms 00:28:32.423 [2024-07-25 09:42:32.888589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.423 [2024-07-25 09:42:32.927055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.423 [2024-07-25 09:42:32.927101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:32.423 [2024-07-25 09:42:32.927113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.490 ms 00:28:32.423 [2024-07-25 09:42:32.927121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.423 [2024-07-25 09:42:32.967953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.423 [2024-07-25 09:42:32.968008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:32.423 [2024-07-25 09:42:32.968022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.798 ms 00:28:32.423 [2024-07-25 09:42:32.968030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.423 [2024-07-25 09:42:32.968101] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:32.423 [2024-07-25 09:42:32.968120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:32.423 [2024-07-25 09:42:32.968133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:32.423 [2024-07-25 09:42:32.968143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 
09:42:32.968160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:28:32.424 [2024-07-25 09:42:32.968421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:32.424 [2024-07-25 09:42:32.968646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.968998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.969006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.969015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.969024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:32.425 [2024-07-25 09:42:32.969041] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:32.425 [2024-07-25 09:42:32.969053] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c8aeb57-7dc0-42c8-bc02-351163259a4d 00:28:32.425 [2024-07-25 09:42:32.969063] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:32.425 [2024-07-25 09:42:32.969085] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:32.425 
[2024-07-25 09:42:32.969110] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:32.425 [2024-07-25 09:42:32.969118] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:32.425 [2024-07-25 09:42:32.969125] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:32.425 [2024-07-25 09:42:32.969133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:32.425 [2024-07-25 09:42:32.969141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:32.425 [2024-07-25 09:42:32.969147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:32.425 [2024-07-25 09:42:32.969154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:32.425 [2024-07-25 09:42:32.969162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.425 [2024-07-25 09:42:32.969173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:32.425 [2024-07-25 09:42:32.969185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.064 ms 00:28:32.425 [2024-07-25 09:42:32.969192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.425 [2024-07-25 09:42:32.990979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.425 [2024-07-25 09:42:32.991022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:32.425 [2024-07-25 09:42:32.991033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.803 ms 00:28:32.425 [2024-07-25 09:42:32.991041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.425 [2024-07-25 09:42:32.991671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:32.425 [2024-07-25 09:42:32.991689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:32.425 [2024-07-25 09:42:32.991698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:28:32.425 [2024-07-25 09:42:32.991707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.040504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.040551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:32.686 [2024-07-25 09:42:33.040563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.040572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.040655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.040666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:32.686 [2024-07-25 09:42:33.040674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.040681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.040727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.040737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:32.686 [2024-07-25 09:42:33.040745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.040753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.040771] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.040784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:32.686 [2024-07-25 09:42:33.040792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.040799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.162942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.162997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:32.686 [2024-07-25 09:42:33.163011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.163020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.269093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.269151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.686 [2024-07-25 09:42:33.269164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.269173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.269292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.269303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.686 [2024-07-25 09:42:33.269311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.269319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.269346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.269355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.686 [2024-07-25 09:42:33.269369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.269376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.269468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.269479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.686 [2024-07-25 09:42:33.269487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.269493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.269528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.269538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:32.686 [2024-07-25 09:42:33.269546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.269559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.269596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.269603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:32.686 [2024-07-25 09:42:33.269611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.269619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:32.686 [2024-07-25 09:42:33.269661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.686 [2024-07-25 09:42:33.269671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.686 [2024-07-25 09:42:33.269682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.686 [2024-07-25 09:42:33.269689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.686 [2024-07-25 09:42:33.269821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 535.088 ms, result 0 00:28:34.063 00:28:34.063 00:28:34.063 09:42:34 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=80698 00:28:34.063 09:42:34 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:28:34.063 09:42:34 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 80698 00:28:34.063 09:42:34 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 80698 ']' 00:28:34.063 09:42:34 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.063 09:42:34 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:34.063 09:42:34 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.063 09:42:34 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:34.063 09:42:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:34.063 [2024-07-25 09:42:34.617189] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:34.063 [2024-07-25 09:42:34.617467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80698 ] 00:28:34.321 [2024-07-25 09:42:34.786291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.578 [2024-07-25 09:42:35.039411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.515 09:42:36 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:35.515 09:42:36 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:28:35.515 09:42:36 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:28:35.774 [2024-07-25 09:42:36.257082] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.774 [2024-07-25 09:42:36.257151] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:36.034 [2024-07-25 09:42:36.431381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.431429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:36.034 [2024-07-25 09:42:36.431443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:36.034 [2024-07-25 09:42:36.431452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.434359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.434392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:36.034 [2024-07-25 09:42:36.434418] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:28:36.034 [2024-07-25 09:42:36.434427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.434507] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:36.034 [2024-07-25 09:42:36.435685] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:36.034 [2024-07-25 09:42:36.435713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.435723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:36.034 [2024-07-25 09:42:36.435733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:28:36.034 [2024-07-25 09:42:36.435744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.437201] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:36.034 [2024-07-25 09:42:36.456502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.456535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:36.034 [2024-07-25 09:42:36.456549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.335 ms 00:28:36.034 [2024-07-25 09:42:36.456557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.456665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.456689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:36.034 [2024-07-25 09:42:36.456699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:36.034 [2024-07-25 09:42:36.456706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.463578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.463606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:36.034 [2024-07-25 09:42:36.463621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.837 ms 00:28:36.034 [2024-07-25 09:42:36.463629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.463768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.463781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:36.034 [2024-07-25 09:42:36.463791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:28:36.034 [2024-07-25 09:42:36.463801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.463830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.463845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:36.034 [2024-07-25 09:42:36.463854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:36.034 [2024-07-25 09:42:36.463861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.463904] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:36.034 [2024-07-25 09:42:36.469500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:36.034 [2024-07-25 09:42:36.469537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:36.034 [2024-07-25 09:42:36.469563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.632 ms 00:28:36.034 [2024-07-25 09:42:36.469571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.469631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.469645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:36.034 [2024-07-25 09:42:36.469655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:36.034 [2024-07-25 09:42:36.469663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.469683] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:36.034 [2024-07-25 09:42:36.469703] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:36.034 [2024-07-25 09:42:36.469742] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:36.034 [2024-07-25 09:42:36.469762] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:36.034 [2024-07-25 09:42:36.469843] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:36.034 [2024-07-25 09:42:36.469860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:36.034 [2024-07-25 09:42:36.469870] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:36.034 [2024-07-25 09:42:36.469881] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:36.034 [2024-07-25 09:42:36.469889] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:36.034 [2024-07-25 09:42:36.469898] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:36.034 [2024-07-25 09:42:36.469905] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:36.034 [2024-07-25 09:42:36.469913] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:36.034 [2024-07-25 09:42:36.469920] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:36.034 [2024-07-25 09:42:36.469931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.469938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:36.034 [2024-07-25 09:42:36.469963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:28:36.034 [2024-07-25 09:42:36.469973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.470046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.034 [2024-07-25 09:42:36.470054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:36.034 [2024-07-25 09:42:36.470063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:28:36.034 [2024-07-25 09:42:36.470071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.034 [2024-07-25 09:42:36.470165] 
ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:36.034 [2024-07-25 09:42:36.470184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:36.034 [2024-07-25 09:42:36.470194] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:36.034 [2024-07-25 09:42:36.470202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.034 [2024-07-25 09:42:36.470228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:36.034 [2024-07-25 09:42:36.470234] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:36.034 [2024-07-25 09:42:36.470252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:36.034 [2024-07-25 09:42:36.470260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:36.034 [2024-07-25 09:42:36.470270] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:36.034 [2024-07-25 09:42:36.470277] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:36.034 [2024-07-25 09:42:36.470286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:36.034 [2024-07-25 09:42:36.470293] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:36.034 [2024-07-25 09:42:36.470301] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:36.034 [2024-07-25 09:42:36.470308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:36.034 [2024-07-25 09:42:36.470316] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:36.034 [2024-07-25 09:42:36.470322] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.034 [2024-07-25 09:42:36.470330] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:36.034 [2024-07-25 09:42:36.470337] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:36.034 [2024-07-25 09:42:36.470345] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.035 [2024-07-25 09:42:36.470352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:36.035 [2024-07-25 09:42:36.470360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:36.035 [2024-07-25 09:42:36.470366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.035 [2024-07-25 09:42:36.470374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:36.035 [2024-07-25 09:42:36.470381] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:36.035 [2024-07-25 09:42:36.470390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.035 [2024-07-25 09:42:36.470397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:36.035 [2024-07-25 09:42:36.470405] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:36.035 [2024-07-25 09:42:36.470420] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.035 [2024-07-25 09:42:36.470429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:36.035 [2024-07-25 09:42:36.470436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:36.035 [2024-07-25 09:42:36.470444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:36.035 [2024-07-25 09:42:36.470450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:36.035 [2024-07-25 
09:42:36.470458] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:36.035 [2024-07-25 09:42:36.470465] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:36.035 [2024-07-25 09:42:36.470473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:36.035 [2024-07-25 09:42:36.470480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:36.035 [2024-07-25 09:42:36.470487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:36.035 [2024-07-25 09:42:36.470493] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:36.035 [2024-07-25 09:42:36.470501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:36.035 [2024-07-25 09:42:36.470508] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.035 [2024-07-25 09:42:36.470517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:36.035 [2024-07-25 09:42:36.470524] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:36.035 [2024-07-25 09:42:36.470532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.035 [2024-07-25 09:42:36.470539] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:36.035 [2024-07-25 09:42:36.470547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:36.035 [2024-07-25 09:42:36.470554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:36.035 [2024-07-25 09:42:36.470562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:36.035 [2024-07-25 09:42:36.470569] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:36.035 [2024-07-25 09:42:36.470577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:36.035 [2024-07-25 09:42:36.470584] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:36.035 [2024-07-25 09:42:36.470592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:36.035 [2024-07-25 09:42:36.470598] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:36.035 [2024-07-25 09:42:36.470607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:36.035 [2024-07-25 09:42:36.470614] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:36.035 [2024-07-25 09:42:36.470626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.035 [2024-07-25 09:42:36.470635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:36.035 [2024-07-25 09:42:36.470662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:36.035 [2024-07-25 09:42:36.470670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:36.035 [2024-07-25 09:42:36.470679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:36.035 [2024-07-25 09:42:36.470687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:36.035 
[2024-07-25 09:42:36.470695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:36.035 [2024-07-25 09:42:36.470702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:36.035 [2024-07-25 09:42:36.470711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:36.035 [2024-07-25 09:42:36.470718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:36.035 [2024-07-25 09:42:36.470726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:36.035 [2024-07-25 09:42:36.470733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:36.035 [2024-07-25 09:42:36.470741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:36.035 [2024-07-25 09:42:36.470749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:36.035 [2024-07-25 09:42:36.470757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:36.035 [2024-07-25 09:42:36.470764] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:36.035 [2024-07-25 09:42:36.470774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:36.035 [2024-07-25 09:42:36.470781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:36.035 [2024-07-25 09:42:36.470792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:36.035 [2024-07-25 09:42:36.470800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:36.035 [2024-07-25 09:42:36.470808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:36.035 [2024-07-25 09:42:36.470816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.035 [2024-07-25 09:42:36.470826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:36.035 [2024-07-25 09:42:36.470834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:28:36.035 [2024-07-25 09:42:36.470845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.035 [2024-07-25 09:42:36.515728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.035 [2024-07-25 09:42:36.515785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:36.035 [2024-07-25 09:42:36.515801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.908 ms 00:28:36.035 [2024-07-25 09:42:36.515810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.035 [2024-07-25 09:42:36.515995] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.035 [2024-07-25 09:42:36.516009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:36.035 [2024-07-25 09:42:36.516020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:28:36.035 [2024-07-25 09:42:36.516030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.035 [2024-07-25 09:42:36.567756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.035 [2024-07-25 09:42:36.567800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:36.035 [2024-07-25 09:42:36.567812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.801 ms 00:28:36.035 [2024-07-25 09:42:36.567821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.035 [2024-07-25 09:42:36.567950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.035 [2024-07-25 09:42:36.567964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:36.035 [2024-07-25 09:42:36.567974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:36.035 [2024-07-25 09:42:36.567985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.035 [2024-07-25 09:42:36.568437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.035 [2024-07-25 09:42:36.568464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:36.035 [2024-07-25 09:42:36.568474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:28:36.035 [2024-07-25 09:42:36.568485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.035 [2024-07-25 09:42:36.568607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.035 [2024-07-25 09:42:36.568629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:36.035 [2024-07-25 09:42:36.568638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:28:36.035 [2024-07-25 09:42:36.568648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.035 [2024-07-25 09:42:36.591335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.035 [2024-07-25 09:42:36.591376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:36.035 [2024-07-25 09:42:36.591388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.706 ms 00:28:36.035 [2024-07-25 09:42:36.591398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.035 [2024-07-25 09:42:36.611402] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:36.035 [2024-07-25 09:42:36.611458] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:36.035 [2024-07-25 09:42:36.611473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.035 [2024-07-25 09:42:36.611483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:36.035 [2024-07-25 09:42:36.611492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.994 ms 00:28:36.035 [2024-07-25 09:42:36.611501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.035 [2024-07-25 09:42:36.642732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.035 [2024-07-25 
09:42:36.642875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:36.035 [2024-07-25 09:42:36.642887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.218 ms 00:28:36.035 [2024-07-25 09:42:36.642898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.662580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.662620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:36.295 [2024-07-25 09:42:36.662642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.646 ms 00:28:36.295 [2024-07-25 09:42:36.662653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.682020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.682055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:36.295 [2024-07-25 09:42:36.682082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.338 ms 00:28:36.295 [2024-07-25 09:42:36.682090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.683069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.683102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:36.295 [2024-07-25 09:42:36.683113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:28:36.295 [2024-07-25 09:42:36.683123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.779226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.779298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:36.295 [2024-07-25 09:42:36.779314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.259 ms 00:28:36.295 [2024-07-25 09:42:36.779324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.791425] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:36.295 [2024-07-25 09:42:36.808472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.808536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:36.295 [2024-07-25 09:42:36.808553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.068 ms 00:28:36.295 [2024-07-25 09:42:36.808561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.808699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.808710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:36.295 [2024-07-25 09:42:36.808722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:36.295 [2024-07-25 09:42:36.808729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.808785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.808794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:36.295 [2024-07-25 09:42:36.808805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:36.295 
[2024-07-25 09:42:36.808813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.808838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.808846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:36.295 [2024-07-25 09:42:36.808855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:36.295 [2024-07-25 09:42:36.808863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.808913] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:36.295 [2024-07-25 09:42:36.808923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.808935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:36.295 [2024-07-25 09:42:36.808944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:36.295 [2024-07-25 09:42:36.808956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.847503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.847550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:36.295 [2024-07-25 09:42:36.847563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.597 ms 00:28:36.295 [2024-07-25 09:42:36.847573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.847681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.295 [2024-07-25 09:42:36.847698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:36.295 [2024-07-25 09:42:36.847710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:36.295 [2024-07-25 09:42:36.847718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.295 [2024-07-25 09:42:36.848726] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:36.295 [2024-07-25 09:42:36.854609] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.801 ms, result 0 00:28:36.295 [2024-07-25 09:42:36.855763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:36.295 Some configs were skipped because the RPC state that can call them passed over. 
00:28:36.553 09:42:36 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:28:36.553 [2024-07-25 09:42:37.068046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.553 [2024-07-25 09:42:37.068108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:36.553 [2024-07-25 09:42:37.068129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.634 ms 00:28:36.553 [2024-07-25 09:42:37.068138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.553 [2024-07-25 09:42:37.068180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.783 ms, result 0 00:28:36.553 true 00:28:36.553 09:42:37 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:28:36.813 [2024-07-25 09:42:37.307211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:36.813 [2024-07-25 09:42:37.307274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:28:36.813 [2024-07-25 09:42:37.307288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:28:36.813 [2024-07-25 09:42:37.307298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:36.813 [2024-07-25 09:42:37.307337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.224 ms, result 0 00:28:36.813 true 00:28:36.813 09:42:37 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 80698 00:28:36.813 09:42:37 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 80698 ']' 00:28:36.813 09:42:37 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 80698 00:28:36.813 09:42:37 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:28:36.813 09:42:37 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:36.813 09:42:37 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80698 00:28:36.813 killing process with pid 80698 00:28:36.813 09:42:37 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:36.813 09:42:37 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:36.813 09:42:37 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80698' 00:28:36.813 09:42:37 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 80698 00:28:36.813 09:42:37 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 80698 00:28:38.191 [2024-07-25 09:42:38.505610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.191 [2024-07-25 09:42:38.505669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:38.191 [2024-07-25 09:42:38.505700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:38.191 [2024-07-25 09:42:38.505709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.191 [2024-07-25 09:42:38.505732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:38.191 [2024-07-25 09:42:38.509688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.191 [2024-07-25 09:42:38.509721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:38.191 [2024-07-25 09:42:38.509732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.951 ms 00:28:38.191 [2024-07-25 09:42:38.509743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.191 [2024-07-25 09:42:38.510006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.191 [2024-07-25 09:42:38.510019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:38.191 [2024-07-25 09:42:38.510028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:28:38.191 [2024-07-25 09:42:38.510036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.191 [2024-07-25 09:42:38.513626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.191 [2024-07-25 09:42:38.513666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:38.191 [2024-07-25 09:42:38.513676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.580 ms 00:28:38.191 [2024-07-25 09:42:38.513684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.191 [2024-07-25 09:42:38.519629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.191 [2024-07-25 09:42:38.519663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:38.191 [2024-07-25 09:42:38.519672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.921 ms 00:28:38.191 [2024-07-25 09:42:38.519682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.191 [2024-07-25 09:42:38.535361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.191 [2024-07-25 09:42:38.535397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:38.191 [2024-07-25 09:42:38.535409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.660 ms 00:28:38.191 [2024-07-25 09:42:38.535419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.191 [2024-07-25 09:42:38.546961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.191 [2024-07-25 09:42:38.547003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:38.191 [2024-07-25 09:42:38.547014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.515 ms 00:28:38.191 [2024-07-25 09:42:38.547022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.191 [2024-07-25 09:42:38.547156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.191 [2024-07-25 09:42:38.547169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:38.191 [2024-07-25 09:42:38.547177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:28:38.191 [2024-07-25 09:42:38.547199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.191 [2024-07-25 09:42:38.564073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.191 [2024-07-25 09:42:38.564106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:38.191 [2024-07-25 09:42:38.564116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.887 ms 00:28:38.191 [2024-07-25 09:42:38.564125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.192 [2024-07-25 09:42:38.580377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.192 [2024-07-25 09:42:38.580465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:38.192 [2024-07-25 
09:42:38.580495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.238 ms 00:28:38.192 [2024-07-25 09:42:38.580522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.192 [2024-07-25 09:42:38.595779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.192 [2024-07-25 09:42:38.595858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:38.192 [2024-07-25 09:42:38.595910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.229 ms 00:28:38.192 [2024-07-25 09:42:38.595933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.192 [2024-07-25 09:42:38.610809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.192 [2024-07-25 09:42:38.610878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:38.192 [2024-07-25 09:42:38.610909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.828 ms 00:28:38.192 [2024-07-25 09:42:38.610930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.192 [2024-07-25 09:42:38.611002] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:38.192 [2024-07-25 09:42:38.611047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611817] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.611956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 
09:42:38.612566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:28:38.192 [2024-07-25 09:42:38.612780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:38.192 [2024-07-25 09:42:38.612840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.612994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.613005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.613013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.613023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.613030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.613040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.613048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.613058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.613065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:38.193 [2024-07-25 09:42:38.613082] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:38.193 [2024-07-25 09:42:38.613090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1c8aeb57-7dc0-42c8-bc02-351163259a4d 00:28:38.193 [2024-07-25 09:42:38.613102] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:38.193 [2024-07-25 09:42:38.613110] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:38.193 [2024-07-25 09:42:38.613120] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:38.193 [2024-07-25 09:42:38.613128] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:38.193 [2024-07-25 09:42:38.613137] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:38.193 [2024-07-25 09:42:38.613145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:38.193 [2024-07-25 09:42:38.613154] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:38.193 [2024-07-25 09:42:38.613161] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:38.193 [2024-07-25 09:42:38.613182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:38.193 [2024-07-25 09:42:38.613191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.193 [2024-07-25 09:42:38.613213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:38.193 [2024-07-25 09:42:38.613221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:28:38.193 [2024-07-25 09:42:38.613232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.193 [2024-07-25 09:42:38.634181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:38.193 [2024-07-25 09:42:38.634212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:38.193 [2024-07-25 09:42:38.634223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.957 ms 00:28:38.193 [2024-07-25 09:42:38.634249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.193 [2024-07-25 09:42:38.634757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:38.193 [2024-07-25 09:42:38.634775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:38.193 [2024-07-25 09:42:38.634787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:28:38.193 [2024-07-25 09:42:38.634796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.193 [2024-07-25 09:42:38.702329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.193 [2024-07-25 09:42:38.702436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:38.193 [2024-07-25 09:42:38.702476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.193 [2024-07-25 09:42:38.702509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.193 [2024-07-25 09:42:38.702667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.193 [2024-07-25 09:42:38.702715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:38.193 [2024-07-25 09:42:38.702752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.193 [2024-07-25 09:42:38.702791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.193 [2024-07-25 09:42:38.702874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.193 [2024-07-25 09:42:38.702923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:38.193 [2024-07-25 09:42:38.702955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.193 [2024-07-25 09:42:38.702993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.193 [2024-07-25 09:42:38.703045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.193 [2024-07-25 09:42:38.703090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:38.193 [2024-07-25 09:42:38.703120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.193 [2024-07-25 09:42:38.703148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.453 [2024-07-25 09:42:38.826679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.453 [2024-07-25 09:42:38.826830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:38.453 [2024-07-25 09:42:38.826870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.453 [2024-07-25 09:42:38.826893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.453 [2024-07-25 09:42:38.929904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.453 [2024-07-25 09:42:38.930043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:38.453 [2024-07-25 09:42:38.930098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.453 [2024-07-25 09:42:38.930121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.453 [2024-07-25 09:42:38.930262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.453 [2024-07-25 09:42:38.930303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:38.453 [2024-07-25 09:42:38.930330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.453 [2024-07-25 09:42:38.930359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:38.453 [2024-07-25 09:42:38.930419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.453 [2024-07-25 09:42:38.930454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:38.453 [2024-07-25 09:42:38.930481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.453 [2024-07-25 09:42:38.930510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.453 [2024-07-25 09:42:38.930644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.453 [2024-07-25 09:42:38.930688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:38.453 [2024-07-25 09:42:38.930713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.453 [2024-07-25 09:42:38.930738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.453 [2024-07-25 09:42:38.930792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.454 [2024-07-25 09:42:38.930821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:38.454 [2024-07-25 09:42:38.930851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.454 [2024-07-25 09:42:38.930883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.454 [2024-07-25 09:42:38.930939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.454 [2024-07-25 09:42:38.930963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:38.454 [2024-07-25 09:42:38.930988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.454 [2024-07-25 09:42:38.931016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.454 [2024-07-25 09:42:38.931077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:38.454 [2024-07-25 09:42:38.931111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:38.454 [2024-07-25 09:42:38.931135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:38.454 [2024-07-25 09:42:38.931156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:38.454 [2024-07-25 09:42:38.931322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 426.526 ms, result 0 00:28:39.831 09:42:40 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:39.831 [2024-07-25 09:42:40.112220] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:39.831 [2024-07-25 09:42:40.112458] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80767 ] 00:28:39.831 [2024-07-25 09:42:40.276311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.089 [2024-07-25 09:42:40.508882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.348 [2024-07-25 09:42:40.898890] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:40.348 [2024-07-25 09:42:40.899036] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:40.609 [2024-07-25 09:42:41.057646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.057758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:40.609 [2024-07-25 09:42:41.057791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:40.609 [2024-07-25 09:42:41.057812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.060986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.061072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:40.609 [2024-07-25 09:42:41.061100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.146 ms 00:28:40.609 [2024-07-25 09:42:41.061121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.061257] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:40.609 [2024-07-25 09:42:41.062579] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:40.609 [2024-07-25 09:42:41.062658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.062685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:40.609 [2024-07-25 09:42:41.062709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:28:40.609 [2024-07-25 09:42:41.062730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.064308] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:40.609 [2024-07-25 09:42:41.086078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.086154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:40.609 [2024-07-25 09:42:41.086191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.813 ms 00:28:40.609 [2024-07-25 09:42:41.086212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.086368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.086411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:40.609 [2024-07-25 09:42:41.086445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:40.609 [2024-07-25 09:42:41.086477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.093658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:40.609 [2024-07-25 09:42:41.093734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:40.609 [2024-07-25 09:42:41.093765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.117 ms 00:28:40.609 [2024-07-25 09:42:41.093789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.093917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.093963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:40.609 [2024-07-25 09:42:41.093993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:28:40.609 [2024-07-25 09:42:41.094026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.094088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.094125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:40.609 [2024-07-25 09:42:41.094159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:40.609 [2024-07-25 09:42:41.094189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.094255] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:28:40.609 [2024-07-25 09:42:41.100016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.100079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:40.609 [2024-07-25 09:42:41.100111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.780 ms 00:28:40.609 [2024-07-25 09:42:41.100155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.100252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.100266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:40.609 [2024-07-25 09:42:41.100275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:40.609 [2024-07-25 09:42:41.100283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.100307] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:40.609 [2024-07-25 09:42:41.100330] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:40.609 [2024-07-25 09:42:41.100370] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:40.609 [2024-07-25 09:42:41.100386] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:28:40.609 [2024-07-25 09:42:41.100479] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:40.609 [2024-07-25 09:42:41.100490] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:40.609 [2024-07-25 09:42:41.100501] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:40.609 [2024-07-25 09:42:41.100511] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:40.609 [2024-07-25 09:42:41.100521] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:40.609 [2024-07-25 09:42:41.100532] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:28:40.609 [2024-07-25 09:42:41.100541] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:40.609 [2024-07-25 09:42:41.100549] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:40.609 [2024-07-25 09:42:41.100556] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:40.609 [2024-07-25 09:42:41.100565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.100573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:40.609 [2024-07-25 09:42:41.100582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:28:40.609 [2024-07-25 09:42:41.100590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.100670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.609 [2024-07-25 09:42:41.100679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:40.609 [2024-07-25 09:42:41.100691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:28:40.609 [2024-07-25 09:42:41.100699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.609 [2024-07-25 09:42:41.100791] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:40.609 [2024-07-25 09:42:41.100802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:40.609 [2024-07-25 09:42:41.100811] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:40.609 [2024-07-25 09:42:41.100819] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.609 [2024-07-25 09:42:41.100828] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:40.609 [2024-07-25 09:42:41.100835] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:40.609 [2024-07-25 09:42:41.100843] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:28:40.609 [2024-07-25 09:42:41.100851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:40.609 [2024-07-25 09:42:41.100858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:28:40.609 [2024-07-25 09:42:41.100866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:40.609 [2024-07-25 09:42:41.100876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:40.609 [2024-07-25 09:42:41.100884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:28:40.609 [2024-07-25 09:42:41.100891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:40.609 [2024-07-25 09:42:41.100898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:40.609 [2024-07-25 09:42:41.100905] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:28:40.609 [2024-07-25 09:42:41.100913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.609 [2024-07-25 09:42:41.100920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:40.609 [2024-07-25 09:42:41.100928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:28:40.609 [2024-07-25 09:42:41.100949] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.609 [2024-07-25 09:42:41.100957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:40.610 [2024-07-25 09:42:41.100965] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:28:40.610 [2024-07-25 09:42:41.100972] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.610 [2024-07-25 09:42:41.100979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:40.610 [2024-07-25 09:42:41.100987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:28:40.610 [2024-07-25 09:42:41.100994] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.610 [2024-07-25 09:42:41.101001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:40.610 [2024-07-25 09:42:41.101008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:28:40.610 [2024-07-25 09:42:41.101016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.610 [2024-07-25 09:42:41.101023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:40.610 [2024-07-25 09:42:41.101031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:28:40.610 [2024-07-25 09:42:41.101038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:40.610 [2024-07-25 09:42:41.101045] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:40.610 [2024-07-25 09:42:41.101052] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:28:40.610 [2024-07-25 09:42:41.101059] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:40.610 [2024-07-25 09:42:41.101066] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:40.610 [2024-07-25 09:42:41.101073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:28:40.610 [2024-07-25 09:42:41.101081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:40.610 [2024-07-25 09:42:41.101089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:40.610 [2024-07-25 09:42:41.101096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:28:40.610 [2024-07-25 09:42:41.101103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.610 [2024-07-25 09:42:41.101111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:40.610 [2024-07-25 09:42:41.101118] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:28:40.610 [2024-07-25 09:42:41.101126] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.610 [2024-07-25 09:42:41.101133] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:40.610 [2024-07-25 09:42:41.101141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:40.610 [2024-07-25 09:42:41.101148] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:40.610 [2024-07-25 09:42:41.101157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:40.610 [2024-07-25 09:42:41.101169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:40.610 [2024-07-25 09:42:41.101177] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:40.610 [2024-07-25 09:42:41.101184] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:40.610 
[2024-07-25 09:42:41.101191] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:40.610 [2024-07-25 09:42:41.101198] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:40.610 [2024-07-25 09:42:41.101206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:40.610 [2024-07-25 09:42:41.101215] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:40.610 [2024-07-25 09:42:41.101225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:40.610 [2024-07-25 09:42:41.101266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:28:40.610 [2024-07-25 09:42:41.101274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:28:40.610 [2024-07-25 09:42:41.101281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:28:40.610 [2024-07-25 09:42:41.101289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:28:40.610 [2024-07-25 09:42:41.101296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:28:40.610 [2024-07-25 09:42:41.101303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:28:40.610 [2024-07-25 09:42:41.101310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:28:40.610 [2024-07-25 09:42:41.101317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:28:40.610 [2024-07-25 09:42:41.101324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:28:40.610 [2024-07-25 09:42:41.101331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:28:40.610 [2024-07-25 09:42:41.101338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:28:40.610 [2024-07-25 09:42:41.101346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:28:40.610 [2024-07-25 09:42:41.101353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:28:40.610 [2024-07-25 09:42:41.101360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:28:40.610 [2024-07-25 09:42:41.101367] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:40.610 [2024-07-25 09:42:41.101374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:40.610 [2024-07-25 09:42:41.101382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:40.610 [2024-07-25 09:42:41.101389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:40.610 [2024-07-25 09:42:41.101397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:40.610 [2024-07-25 09:42:41.101406] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:40.610 [2024-07-25 09:42:41.101414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.610 [2024-07-25 09:42:41.101422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:40.610 [2024-07-25 09:42:41.101430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:28:40.610 [2024-07-25 09:42:41.101437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.610 [2024-07-25 09:42:41.155248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.610 [2024-07-25 09:42:41.155293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:40.610 [2024-07-25 09:42:41.155309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.857 ms 00:28:40.610 [2024-07-25 09:42:41.155317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.610 [2024-07-25 09:42:41.155483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.610 [2024-07-25 09:42:41.155496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:40.610 [2024-07-25 09:42:41.155505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:28:40.610 [2024-07-25 09:42:41.155512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.610 [2024-07-25 09:42:41.205499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.610 [2024-07-25 09:42:41.205538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:40.610 [2024-07-25 09:42:41.205550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.064 ms 00:28:40.610 [2024-07-25 09:42:41.205561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.610 [2024-07-25 09:42:41.205649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.610 [2024-07-25 09:42:41.205658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:40.610 [2024-07-25 09:42:41.205667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:40.610 [2024-07-25 09:42:41.205674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.610 [2024-07-25 09:42:41.206095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.610 [2024-07-25 09:42:41.206106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:40.610 [2024-07-25 09:42:41.206114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:28:40.610 [2024-07-25 09:42:41.206121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.610 [2024-07-25 09:42:41.206246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.610 [2024-07-25 09:42:41.206276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:40.610 [2024-07-25 09:42:41.206283] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:28:40.610 [2024-07-25 09:42:41.206291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.870 [2024-07-25 09:42:41.227206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.870 [2024-07-25 09:42:41.227267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:40.870 [2024-07-25 09:42:41.227280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.930 ms 00:28:40.870 [2024-07-25 09:42:41.227288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.870 [2024-07-25 09:42:41.247467] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:40.870 [2024-07-25 09:42:41.247505] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:40.870 [2024-07-25 09:42:41.247518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.870 [2024-07-25 09:42:41.247526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:40.870 [2024-07-25 09:42:41.247535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.128 ms 00:28:40.870 [2024-07-25 09:42:41.247543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.870 [2024-07-25 09:42:41.280114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.870 [2024-07-25 09:42:41.280157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:40.870 [2024-07-25 09:42:41.280170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.546 ms 00:28:40.870 [2024-07-25 09:42:41.280178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.870 [2024-07-25 09:42:41.300755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.870 [2024-07-25 09:42:41.300796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:40.870 [2024-07-25 09:42:41.300808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.500 ms 00:28:40.870 [2024-07-25 09:42:41.300816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.870 [2024-07-25 09:42:41.321157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.870 [2024-07-25 09:42:41.321196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:40.870 [2024-07-25 09:42:41.321207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.301 ms 00:28:40.870 [2024-07-25 09:42:41.321215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.870 [2024-07-25 09:42:41.322107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.870 [2024-07-25 09:42:41.322137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:40.870 [2024-07-25 09:42:41.322147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:28:40.870 [2024-07-25 09:42:41.322155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.870 [2024-07-25 09:42:41.411748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.870 [2024-07-25 09:42:41.411813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:40.870 [2024-07-25 09:42:41.411827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 89.736 ms 00:28:40.870 [2024-07-25 09:42:41.411835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.870 [2024-07-25 09:42:41.424240] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:28:40.870 [2024-07-25 09:42:41.440798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.870 [2024-07-25 09:42:41.440851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:40.871 [2024-07-25 09:42:41.440865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.856 ms 00:28:40.871 [2024-07-25 09:42:41.440872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.871 [2024-07-25 09:42:41.440997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.871 [2024-07-25 09:42:41.441008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:40.871 [2024-07-25 09:42:41.441017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:40.871 [2024-07-25 09:42:41.441023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.871 [2024-07-25 09:42:41.441078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.871 [2024-07-25 09:42:41.441086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:40.871 [2024-07-25 09:42:41.441095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:40.871 [2024-07-25 09:42:41.441102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.871 [2024-07-25 09:42:41.441124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.871 [2024-07-25 09:42:41.441132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:40.871 [2024-07-25 09:42:41.441140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:40.871 [2024-07-25 09:42:41.441147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.871 [2024-07-25 09:42:41.441179] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:40.871 [2024-07-25 09:42:41.441188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.871 [2024-07-25 09:42:41.441195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:40.871 [2024-07-25 09:42:41.441202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:40.871 [2024-07-25 09:42:41.441210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.871 [2024-07-25 09:42:41.480591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.871 [2024-07-25 09:42:41.480634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:40.871 [2024-07-25 09:42:41.480646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.410 ms 00:28:40.871 [2024-07-25 09:42:41.480654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.871 [2024-07-25 09:42:41.480769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.871 [2024-07-25 09:42:41.480782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:40.871 [2024-07-25 09:42:41.480791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:40.871 [2024-07-25 09:42:41.480799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:40.871 [2024-07-25 09:42:41.481788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:41.130 [2024-07-25 09:42:41.487046] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 424.632 ms, result 0 00:28:41.130 [2024-07-25 09:42:41.487833] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:41.130 [2024-07-25 09:42:41.506779] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:51.282  Copying: 31/256 [MB] (31 MBps) Copying: 58/256 [MB] (26 MBps) Copying: 82/256 [MB] (24 MBps) Copying: 109/256 [MB] (26 MBps) Copying: 135/256 [MB] (26 MBps) Copying: 163/256 [MB] (27 MBps) Copying: 191/256 [MB] (28 MBps) Copying: 220/256 [MB] (28 MBps) Copying: 247/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-07-25 09:42:51.683947] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:51.282 [2024-07-25 09:42:51.702732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.282 [2024-07-25 09:42:51.702779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:51.282 [2024-07-25 09:42:51.702793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:51.282 [2024-07-25 09:42:51.702809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.282 [2024-07-25 09:42:51.702834] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:28:51.282 [2024-07-25 09:42:51.707198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.282 [2024-07-25 09:42:51.707238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:51.282 [2024-07-25 09:42:51.707248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.359 ms 00:28:51.282 [2024-07-25 09:42:51.707256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.282 [2024-07-25 09:42:51.707522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.282 [2024-07-25 09:42:51.707547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:51.282 [2024-07-25 09:42:51.707557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:28:51.282 [2024-07-25 09:42:51.707565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.282 [2024-07-25 09:42:51.710602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.282 [2024-07-25 09:42:51.710628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:51.282 [2024-07-25 09:42:51.710636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.026 ms 00:28:51.282 [2024-07-25 09:42:51.710643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.282 [2024-07-25 09:42:51.716523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.282 [2024-07-25 09:42:51.716556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:51.282 [2024-07-25 09:42:51.716566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.872 ms 00:28:51.282 [2024-07-25 09:42:51.716573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.282 [2024-07-25 09:42:51.758163] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.282 [2024-07-25 09:42:51.758246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:51.282 [2024-07-25 09:42:51.758265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.588 ms 00:28:51.283 [2024-07-25 09:42:51.758276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.283 [2024-07-25 09:42:51.783968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.283 [2024-07-25 09:42:51.784035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:51.283 [2024-07-25 09:42:51.784074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.612 ms 00:28:51.283 [2024-07-25 09:42:51.784085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.283 [2024-07-25 09:42:51.784326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.283 [2024-07-25 09:42:51.784348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:51.283 [2024-07-25 09:42:51.784360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:28:51.283 [2024-07-25 09:42:51.784371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.283 [2024-07-25 09:42:51.826177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.283 [2024-07-25 09:42:51.826243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:28:51.283 [2024-07-25 09:42:51.826256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.863 ms 00:28:51.283 [2024-07-25 09:42:51.826264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.283 [2024-07-25 09:42:51.870094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.283 [2024-07-25 09:42:51.870156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:28:51.283 [2024-07-25 09:42:51.870170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.825 ms 00:28:51.283 [2024-07-25 09:42:51.870177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.553 [2024-07-25 09:42:51.913019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.553 [2024-07-25 09:42:51.913080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:51.553 [2024-07-25 09:42:51.913094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.826 ms 00:28:51.553 [2024-07-25 09:42:51.913103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.553 [2024-07-25 09:42:51.955662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.553 [2024-07-25 09:42:51.955719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:51.553 [2024-07-25 09:42:51.955732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.496 ms 00:28:51.553 [2024-07-25 09:42:51.955740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.553 [2024-07-25 09:42:51.955820] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:51.553 [2024-07-25 09:42:51.955836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 
09:42:51.955854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.955994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:28:51.553 [2024-07-25 09:42:51.956104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:51.553 [2024-07-25 09:42:51.956504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:51.554 [2024-07-25 09:42:51.956803] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:51.554 [2024-07-25 09:42:51.956812] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
1c8aeb57-7dc0-42c8-bc02-351163259a4d 00:28:51.554 [2024-07-25 09:42:51.956821] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:51.554 [2024-07-25 09:42:51.956829] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:51.554 [2024-07-25 09:42:51.956856] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:51.554 [2024-07-25 09:42:51.956866] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:51.554 [2024-07-25 09:42:51.956874] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:51.554 [2024-07-25 09:42:51.956883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:51.554 [2024-07-25 09:42:51.956892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:51.554 [2024-07-25 09:42:51.956900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:51.554 [2024-07-25 09:42:51.956908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:51.554 [2024-07-25 09:42:51.956917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.554 [2024-07-25 09:42:51.956931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:51.554 [2024-07-25 09:42:51.956941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:28:51.554 [2024-07-25 09:42:51.956958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.554 [2024-07-25 09:42:51.978887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.554 [2024-07-25 09:42:51.978934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:51.554 [2024-07-25 09:42:51.978946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.945 ms 00:28:51.554 [2024-07-25 09:42:51.978953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.554 [2024-07-25 09:42:51.979533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:51.554 [2024-07-25 09:42:51.979551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:51.554 [2024-07-25 09:42:51.979561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:28:51.554 [2024-07-25 09:42:51.979569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.554 [2024-07-25 09:42:52.028184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.554 [2024-07-25 09:42:52.028251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:51.554 [2024-07-25 09:42:52.028267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.554 [2024-07-25 09:42:52.028276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.554 [2024-07-25 09:42:52.028422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.554 [2024-07-25 09:42:52.028434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:51.554 [2024-07-25 09:42:52.028443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.554 [2024-07-25 09:42:52.028450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.554 [2024-07-25 09:42:52.028511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.554 [2024-07-25 09:42:52.028523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:51.554 
[2024-07-25 09:42:52.028531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.554 [2024-07-25 09:42:52.028539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.554 [2024-07-25 09:42:52.028559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.554 [2024-07-25 09:42:52.028573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:51.554 [2024-07-25 09:42:52.028581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.554 [2024-07-25 09:42:52.028588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.554 [2024-07-25 09:42:52.148767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.554 [2024-07-25 09:42:52.148839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:51.554 [2024-07-25 09:42:52.148855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.554 [2024-07-25 09:42:52.148866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.814 [2024-07-25 09:42:52.249009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.814 [2024-07-25 09:42:52.249082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:51.814 [2024-07-25 09:42:52.249107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.814 [2024-07-25 09:42:52.249117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.814 [2024-07-25 09:42:52.249218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.814 [2024-07-25 09:42:52.249229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:51.814 [2024-07-25 09:42:52.249257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.814 [2024-07-25 09:42:52.249268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.814 [2024-07-25 09:42:52.249298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.814 [2024-07-25 09:42:52.249308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:51.814 [2024-07-25 09:42:52.249323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.814 [2024-07-25 09:42:52.249332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.814 [2024-07-25 09:42:52.249431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.814 [2024-07-25 09:42:52.249445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:51.814 [2024-07-25 09:42:52.249454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.814 [2024-07-25 09:42:52.249463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.814 [2024-07-25 09:42:52.249503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.814 [2024-07-25 09:42:52.249515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:51.814 [2024-07-25 09:42:52.249525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.814 [2024-07-25 09:42:52.249538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.814 [2024-07-25 09:42:52.249581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.814 [2024-07-25 09:42:52.249591] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:51.814 [2024-07-25 09:42:52.249600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.814 [2024-07-25 09:42:52.249609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.814 [2024-07-25 09:42:52.249660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:51.814 [2024-07-25 09:42:52.249674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:51.814 [2024-07-25 09:42:52.249686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:51.814 [2024-07-25 09:42:52.249695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:51.814 [2024-07-25 09:42:52.249866] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.186 ms, result 0 00:28:53.192 00:28:53.192 00:28:53.192 09:42:53 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:53.452 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:28:53.452 09:42:53 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:28:53.452 09:42:53 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:28:53.452 09:42:53 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:53.452 09:42:53 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:53.452 09:42:53 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:28:53.452 09:42:53 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:28:53.452 Process with pid 80698 is not found 00:28:53.452 09:42:54 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 80698 00:28:53.452 09:42:54 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 80698 ']' 00:28:53.452 09:42:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 80698 00:28:53.452 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80698) - No such process 00:28:53.452 09:42:54 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 80698 is not found' 00:28:53.452 00:28:53.452 real 1m14.524s 00:28:53.452 user 1m44.796s 00:28:53.452 sys 0m6.181s 00:28:53.452 09:42:54 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.452 09:42:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:28:53.452 ************************************ 00:28:53.452 END TEST ftl_trim 00:28:53.452 ************************************ 00:28:53.712 09:42:54 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:53.712 09:42:54 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:28:53.712 09:42:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:53.712 09:42:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:53.712 ************************************ 00:28:53.712 START TEST ftl_restore 00:28:53.712 ************************************ 00:28:53.712 09:42:54 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:28:53.712 * Looking for test storage... 
00:28:53.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:53.712 09:42:54 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:53.712 09:42:54 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:28:53.712 09:42:54 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:53.712 09:42:54 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:53.712 09:42:54 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:28:53.712 09:42:54 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:53.712 09:42:54 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:53.712 09:42:54 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Vmb5Q11u1L 00:28:53.713 09:42:54 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=80972 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:53.713 09:42:54 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 80972 00:28:53.713 09:42:54 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 80972 ']' 00:28:53.713 09:42:54 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.713 09:42:54 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:53.713 09:42:54 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.713 09:42:54 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:53.713 09:42:54 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:28:53.973 [2024-07-25 09:42:54.344100] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:53.973 [2024-07-25 09:42:54.344901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80972 ] 00:28:53.973 [2024-07-25 09:42:54.510052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.314 [2024-07-25 09:42:54.746839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.265 09:42:55 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.265 09:42:55 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:28:55.265 09:42:55 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:28:55.265 09:42:55 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:28:55.265 09:42:55 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:55.265 09:42:55 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:28:55.265 09:42:55 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:28:55.265 09:42:55 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:55.524 09:42:56 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:28:55.525 09:42:56 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:28:55.525 09:42:56 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:28:55.525 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:28:55.525 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:55.525 09:42:56 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:28:55.525 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:28:55.525 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:28:55.784 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:55.784 { 00:28:55.784 "name": "nvme0n1", 00:28:55.784 "aliases": [ 00:28:55.784 "a51012b5-94cb-4976-bc84-05487070cbef" 00:28:55.784 ], 00:28:55.784 "product_name": "NVMe disk", 00:28:55.784 "block_size": 4096, 00:28:55.784 "num_blocks": 1310720, 00:28:55.784 "uuid": "a51012b5-94cb-4976-bc84-05487070cbef", 00:28:55.784 "assigned_rate_limits": { 00:28:55.784 "rw_ios_per_sec": 0, 00:28:55.784 "rw_mbytes_per_sec": 0, 00:28:55.784 "r_mbytes_per_sec": 0, 00:28:55.784 "w_mbytes_per_sec": 0 00:28:55.784 }, 00:28:55.784 "claimed": true, 00:28:55.784 "claim_type": "read_many_write_one", 00:28:55.784 "zoned": false, 00:28:55.784 "supported_io_types": { 00:28:55.784 "read": true, 00:28:55.784 "write": true, 00:28:55.784 "unmap": true, 00:28:55.784 "flush": true, 00:28:55.784 "reset": true, 00:28:55.784 "nvme_admin": true, 00:28:55.784 "nvme_io": true, 00:28:55.784 "nvme_io_md": false, 00:28:55.784 "write_zeroes": true, 00:28:55.784 "zcopy": false, 00:28:55.784 "get_zone_info": false, 00:28:55.784 "zone_management": false, 00:28:55.784 "zone_append": false, 00:28:55.784 "compare": true, 00:28:55.784 "compare_and_write": false, 00:28:55.784 "abort": true, 00:28:55.784 "seek_hole": false, 00:28:55.784 "seek_data": false, 00:28:55.784 "copy": true, 00:28:55.784 "nvme_iov_md": false 00:28:55.784 }, 00:28:55.784 "driver_specific": { 00:28:55.784 "nvme": [ 00:28:55.784 { 00:28:55.784 "pci_address": "0000:00:11.0", 00:28:55.784 "trid": { 00:28:55.784 "trtype": "PCIe", 00:28:55.784 "traddr": "0000:00:11.0" 00:28:55.784 }, 00:28:55.784 "ctrlr_data": { 00:28:55.784 "cntlid": 0, 00:28:55.784 "vendor_id": "0x1b36", 00:28:55.784 "model_number": "QEMU NVMe Ctrl", 00:28:55.784 "serial_number": "12341", 00:28:55.785 "firmware_revision": "8.0.0", 00:28:55.785 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:55.785 "oacs": { 00:28:55.785 "security": 0, 00:28:55.785 "format": 1, 00:28:55.785 "firmware": 0, 00:28:55.785 "ns_manage": 1 00:28:55.785 }, 00:28:55.785 "multi_ctrlr": false, 00:28:55.785 "ana_reporting": false 00:28:55.785 }, 00:28:55.785 "vs": { 00:28:55.785 "nvme_version": "1.4" 00:28:55.785 }, 00:28:55.785 "ns_data": { 00:28:55.785 "id": 1, 00:28:55.785 "can_share": false 00:28:55.785 } 00:28:55.785 } 00:28:55.785 ], 00:28:55.785 "mp_policy": "active_passive" 00:28:55.785 } 00:28:55.785 } 00:28:55.785 ]' 00:28:55.785 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:55.785 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:28:55.785 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:55.785 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:55.785 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:55.785 09:42:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:28:55.785 09:42:56 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:28:55.785 09:42:56 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:28:55.785 09:42:56 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:28:55.785 09:42:56 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:28:55.785 09:42:56 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:56.045 09:42:56 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=454451de-6c76-40af-9c09-a25c17777091 00:28:56.045 09:42:56 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:28:56.045 09:42:56 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 454451de-6c76-40af-9c09-a25c17777091 00:28:56.304 09:42:56 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:28:56.304 09:42:56 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=05c0b8bb-ae31-44a5-af6e-8f42c3f044a6 00:28:56.304 09:42:56 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 05c0b8bb-ae31-44a5-af6e-8f42c3f044a6 00:28:56.563 09:42:57 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=56889785-737f-4397-acb4-6cc815825537 00:28:56.563 09:42:57 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:28:56.563 09:42:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 56889785-737f-4397-acb4-6cc815825537 00:28:56.563 09:42:57 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:28:56.563 09:42:57 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:56.563 09:42:57 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=56889785-737f-4397-acb4-6cc815825537 00:28:56.563 09:42:57 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:28:56.563 09:42:57 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 56889785-737f-4397-acb4-6cc815825537 00:28:56.563 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=56889785-737f-4397-acb4-6cc815825537 00:28:56.563 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:56.563 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:28:56.563 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:28:56.563 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56889785-737f-4397-acb4-6cc815825537 00:28:56.822 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:56.822 { 00:28:56.822 "name": "56889785-737f-4397-acb4-6cc815825537", 00:28:56.822 "aliases": [ 00:28:56.822 "lvs/nvme0n1p0" 00:28:56.822 ], 00:28:56.822 "product_name": "Logical Volume", 00:28:56.822 "block_size": 4096, 00:28:56.822 "num_blocks": 26476544, 00:28:56.822 "uuid": "56889785-737f-4397-acb4-6cc815825537", 00:28:56.822 "assigned_rate_limits": { 00:28:56.822 "rw_ios_per_sec": 0, 00:28:56.822 "rw_mbytes_per_sec": 0, 00:28:56.822 "r_mbytes_per_sec": 0, 00:28:56.822 "w_mbytes_per_sec": 0 00:28:56.822 }, 00:28:56.822 "claimed": false, 00:28:56.822 "zoned": false, 00:28:56.822 "supported_io_types": { 00:28:56.822 "read": true, 00:28:56.822 "write": true, 00:28:56.822 "unmap": true, 00:28:56.822 "flush": false, 00:28:56.822 "reset": true, 00:28:56.822 "nvme_admin": false, 00:28:56.822 "nvme_io": false, 00:28:56.822 "nvme_io_md": false, 00:28:56.822 "write_zeroes": true, 00:28:56.822 "zcopy": false, 00:28:56.822 "get_zone_info": false, 00:28:56.822 "zone_management": false, 00:28:56.822 "zone_append": false, 00:28:56.822 "compare": false, 00:28:56.822 "compare_and_write": false, 00:28:56.822 "abort": false, 
00:28:56.822 "seek_hole": true, 00:28:56.822 "seek_data": true, 00:28:56.822 "copy": false, 00:28:56.822 "nvme_iov_md": false 00:28:56.822 }, 00:28:56.822 "driver_specific": { 00:28:56.822 "lvol": { 00:28:56.822 "lvol_store_uuid": "05c0b8bb-ae31-44a5-af6e-8f42c3f044a6", 00:28:56.822 "base_bdev": "nvme0n1", 00:28:56.822 "thin_provision": true, 00:28:56.822 "num_allocated_clusters": 0, 00:28:56.822 "snapshot": false, 00:28:56.822 "clone": false, 00:28:56.822 "esnap_clone": false 00:28:56.822 } 00:28:56.822 } 00:28:56.822 } 00:28:56.822 ]' 00:28:56.822 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:56.822 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:28:56.822 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:56.822 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:56.822 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:56.822 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:28:56.822 09:42:57 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:28:56.822 09:42:57 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:28:56.822 09:42:57 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:28:57.081 09:42:57 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:28:57.081 09:42:57 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:28:57.082 09:42:57 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 56889785-737f-4397-acb4-6cc815825537 00:28:57.082 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=56889785-737f-4397-acb4-6cc815825537 00:28:57.082 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:57.082 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:28:57.082 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:28:57.082 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56889785-737f-4397-acb4-6cc815825537 00:28:57.340 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:57.340 { 00:28:57.340 "name": "56889785-737f-4397-acb4-6cc815825537", 00:28:57.340 "aliases": [ 00:28:57.340 "lvs/nvme0n1p0" 00:28:57.340 ], 00:28:57.340 "product_name": "Logical Volume", 00:28:57.340 "block_size": 4096, 00:28:57.340 "num_blocks": 26476544, 00:28:57.340 "uuid": "56889785-737f-4397-acb4-6cc815825537", 00:28:57.340 "assigned_rate_limits": { 00:28:57.340 "rw_ios_per_sec": 0, 00:28:57.340 "rw_mbytes_per_sec": 0, 00:28:57.340 "r_mbytes_per_sec": 0, 00:28:57.340 "w_mbytes_per_sec": 0 00:28:57.340 }, 00:28:57.340 "claimed": false, 00:28:57.340 "zoned": false, 00:28:57.340 "supported_io_types": { 00:28:57.340 "read": true, 00:28:57.340 "write": true, 00:28:57.340 "unmap": true, 00:28:57.340 "flush": false, 00:28:57.340 "reset": true, 00:28:57.340 "nvme_admin": false, 00:28:57.340 "nvme_io": false, 00:28:57.340 "nvme_io_md": false, 00:28:57.340 "write_zeroes": true, 00:28:57.340 "zcopy": false, 00:28:57.340 "get_zone_info": false, 00:28:57.340 "zone_management": false, 00:28:57.340 "zone_append": false, 00:28:57.340 "compare": false, 00:28:57.340 "compare_and_write": false, 00:28:57.340 "abort": false, 00:28:57.340 "seek_hole": true, 00:28:57.340 "seek_data": true, 
00:28:57.340 "copy": false, 00:28:57.340 "nvme_iov_md": false 00:28:57.340 }, 00:28:57.340 "driver_specific": { 00:28:57.340 "lvol": { 00:28:57.340 "lvol_store_uuid": "05c0b8bb-ae31-44a5-af6e-8f42c3f044a6", 00:28:57.340 "base_bdev": "nvme0n1", 00:28:57.340 "thin_provision": true, 00:28:57.340 "num_allocated_clusters": 0, 00:28:57.340 "snapshot": false, 00:28:57.340 "clone": false, 00:28:57.340 "esnap_clone": false 00:28:57.340 } 00:28:57.340 } 00:28:57.340 } 00:28:57.340 ]' 00:28:57.340 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:57.340 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:28:57.340 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:57.340 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:57.340 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:57.340 09:42:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:28:57.340 09:42:57 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:28:57.340 09:42:57 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:28:57.599 09:42:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:28:57.599 09:42:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 56889785-737f-4397-acb4-6cc815825537 00:28:57.599 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=56889785-737f-4397-acb4-6cc815825537 00:28:57.599 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:57.599 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:28:57.599 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:28:57.599 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56889785-737f-4397-acb4-6cc815825537 00:28:57.857 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:57.857 { 00:28:57.857 "name": "56889785-737f-4397-acb4-6cc815825537", 00:28:57.857 "aliases": [ 00:28:57.857 "lvs/nvme0n1p0" 00:28:57.857 ], 00:28:57.857 "product_name": "Logical Volume", 00:28:57.857 "block_size": 4096, 00:28:57.857 "num_blocks": 26476544, 00:28:57.857 "uuid": "56889785-737f-4397-acb4-6cc815825537", 00:28:57.857 "assigned_rate_limits": { 00:28:57.857 "rw_ios_per_sec": 0, 00:28:57.857 "rw_mbytes_per_sec": 0, 00:28:57.857 "r_mbytes_per_sec": 0, 00:28:57.857 "w_mbytes_per_sec": 0 00:28:57.857 }, 00:28:57.857 "claimed": false, 00:28:57.857 "zoned": false, 00:28:57.857 "supported_io_types": { 00:28:57.857 "read": true, 00:28:57.857 "write": true, 00:28:57.857 "unmap": true, 00:28:57.857 "flush": false, 00:28:57.857 "reset": true, 00:28:57.857 "nvme_admin": false, 00:28:57.857 "nvme_io": false, 00:28:57.857 "nvme_io_md": false, 00:28:57.857 "write_zeroes": true, 00:28:57.857 "zcopy": false, 00:28:57.857 "get_zone_info": false, 00:28:57.857 "zone_management": false, 00:28:57.857 "zone_append": false, 00:28:57.857 "compare": false, 00:28:57.857 "compare_and_write": false, 00:28:57.857 "abort": false, 00:28:57.857 "seek_hole": true, 00:28:57.857 "seek_data": true, 00:28:57.857 "copy": false, 00:28:57.857 "nvme_iov_md": false 00:28:57.857 }, 00:28:57.857 "driver_specific": { 00:28:57.857 "lvol": { 00:28:57.857 "lvol_store_uuid": "05c0b8bb-ae31-44a5-af6e-8f42c3f044a6", 00:28:57.857 "base_bdev": "nvme0n1", 
00:28:57.857 "thin_provision": true, 00:28:57.857 "num_allocated_clusters": 0, 00:28:57.857 "snapshot": false, 00:28:57.857 "clone": false, 00:28:57.857 "esnap_clone": false 00:28:57.857 } 00:28:57.857 } 00:28:57.857 } 00:28:57.857 ]' 00:28:57.857 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:57.857 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:28:57.857 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:57.857 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:28:57.857 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:28:57.857 09:42:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:28:57.857 09:42:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:28:57.857 09:42:58 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 56889785-737f-4397-acb4-6cc815825537 --l2p_dram_limit 10' 00:28:57.857 09:42:58 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:28:57.857 09:42:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:28:57.857 09:42:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:28:57.857 09:42:58 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:28:57.857 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:28:57.857 09:42:58 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 56889785-737f-4397-acb4-6cc815825537 --l2p_dram_limit 10 -c nvc0n1p0 00:28:58.118 [2024-07-25 09:42:58.547070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.547212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:58.118 [2024-07-25 09:42:58.547280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:58.118 [2024-07-25 09:42:58.547305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.547393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.547420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:58.118 [2024-07-25 09:42:58.547488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:28:58.118 [2024-07-25 09:42:58.547546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.547606] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:58.118 [2024-07-25 09:42:58.548838] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:58.118 [2024-07-25 09:42:58.548909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.548944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:58.118 [2024-07-25 09:42:58.548967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:28:58.118 [2024-07-25 09:42:58.548999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.549100] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3be97080-f2b7-443f-968a-fc5f7b226e09 00:28:58.118 [2024-07-25 
09:42:58.550530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.550595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:28:58.118 [2024-07-25 09:42:58.550631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:28:58.118 [2024-07-25 09:42:58.550652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.557908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.557978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:58.118 [2024-07-25 09:42:58.558022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.198 ms 00:28:58.118 [2024-07-25 09:42:58.558043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.558156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.558184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:58.118 [2024-07-25 09:42:58.558206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:28:58.118 [2024-07-25 09:42:58.558259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.558367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.558405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:58.118 [2024-07-25 09:42:58.558440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:58.118 [2024-07-25 09:42:58.558470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.558520] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:58.118 [2024-07-25 09:42:58.564628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.564693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:58.118 [2024-07-25 09:42:58.564725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:28:58.118 [2024-07-25 09:42:58.564748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.564802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.564837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:58.118 [2024-07-25 09:42:58.564866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:58.118 [2024-07-25 09:42:58.564888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.564945] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:28:58.118 [2024-07-25 09:42:58.565099] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:58.118 [2024-07-25 09:42:58.565144] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:58.118 [2024-07-25 09:42:58.565190] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:28:58.118 [2024-07-25 09:42:58.565272] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 
103424.00 MiB 00:28:58.118 [2024-07-25 09:42:58.565285] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:58.118 [2024-07-25 09:42:58.565294] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:58.118 [2024-07-25 09:42:58.565307] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:58.118 [2024-07-25 09:42:58.565315] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:58.118 [2024-07-25 09:42:58.565323] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:58.118 [2024-07-25 09:42:58.565331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.565341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:58.118 [2024-07-25 09:42:58.565349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:28:58.118 [2024-07-25 09:42:58.565357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.565430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.118 [2024-07-25 09:42:58.565441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:58.118 [2024-07-25 09:42:58.565449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:58.118 [2024-07-25 09:42:58.565459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.118 [2024-07-25 09:42:58.565543] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:58.118 [2024-07-25 09:42:58.565556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:58.118 [2024-07-25 09:42:58.565577] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:58.118 [2024-07-25 09:42:58.565588] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.118 [2024-07-25 09:42:58.565596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:58.118 [2024-07-25 09:42:58.565605] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:58.118 [2024-07-25 09:42:58.565612] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:58.118 [2024-07-25 09:42:58.565621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:58.118 [2024-07-25 09:42:58.565628] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:58.118 [2024-07-25 09:42:58.565638] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:58.118 [2024-07-25 09:42:58.565645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:58.118 [2024-07-25 09:42:58.565653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:58.118 [2024-07-25 09:42:58.565660] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:58.118 [2024-07-25 09:42:58.565669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:58.118 [2024-07-25 09:42:58.565676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:58.118 [2024-07-25 09:42:58.565685] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.118 [2024-07-25 09:42:58.565691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:58.118 [2024-07-25 09:42:58.565701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 
00:28:58.118 [2024-07-25 09:42:58.565708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.118 [2024-07-25 09:42:58.565716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:58.118 [2024-07-25 09:42:58.565723] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:58.119 [2024-07-25 09:42:58.565731] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.119 [2024-07-25 09:42:58.565738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:58.119 [2024-07-25 09:42:58.565746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:58.119 [2024-07-25 09:42:58.565753] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.119 [2024-07-25 09:42:58.565761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:58.119 [2024-07-25 09:42:58.565767] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:58.119 [2024-07-25 09:42:58.565775] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.119 [2024-07-25 09:42:58.565782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:58.119 [2024-07-25 09:42:58.565790] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:58.119 [2024-07-25 09:42:58.565796] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:58.119 [2024-07-25 09:42:58.565804] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:58.119 [2024-07-25 09:42:58.565810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:58.119 [2024-07-25 09:42:58.565820] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:58.119 [2024-07-25 09:42:58.565827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:58.119 [2024-07-25 09:42:58.565836] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:58.119 [2024-07-25 09:42:58.565842] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:58.119 [2024-07-25 09:42:58.565851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:58.119 [2024-07-25 09:42:58.565858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:58.119 [2024-07-25 09:42:58.565865] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.119 [2024-07-25 09:42:58.565872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:58.119 [2024-07-25 09:42:58.565881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:58.119 [2024-07-25 09:42:58.565887] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.119 [2024-07-25 09:42:58.565894] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:58.119 [2024-07-25 09:42:58.565902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:58.119 [2024-07-25 09:42:58.565914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:58.119 [2024-07-25 09:42:58.565921] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:58.119 [2024-07-25 09:42:58.565930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:58.119 [2024-07-25 09:42:58.565938] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:58.119 [2024-07-25 09:42:58.565947] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:58.119 [2024-07-25 09:42:58.565954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:58.119 [2024-07-25 09:42:58.565962] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:58.119 [2024-07-25 09:42:58.565969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:58.119 [2024-07-25 09:42:58.565981] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:58.119 [2024-07-25 09:42:58.565992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.119 [2024-07-25 09:42:58.566003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:58.119 [2024-07-25 09:42:58.566010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:58.119 [2024-07-25 09:42:58.566019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:58.119 [2024-07-25 09:42:58.566026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:58.119 [2024-07-25 09:42:58.566036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:58.119 [2024-07-25 09:42:58.566043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:58.119 [2024-07-25 09:42:58.566052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:58.119 [2024-07-25 09:42:58.566069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:58.119 [2024-07-25 09:42:58.566078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:58.119 [2024-07-25 09:42:58.566085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:58.119 [2024-07-25 09:42:58.566095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:58.119 [2024-07-25 09:42:58.566102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:58.119 [2024-07-25 09:42:58.566111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:58.119 [2024-07-25 09:42:58.566119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:58.119 [2024-07-25 09:42:58.566128] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:58.119 [2024-07-25 09:42:58.566136] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:58.119 [2024-07-25 09:42:58.566146] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:58.119 [2024-07-25 09:42:58.566153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:58.119 [2024-07-25 09:42:58.566163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:58.119 [2024-07-25 09:42:58.566170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:58.119 [2024-07-25 09:42:58.566180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:58.119 [2024-07-25 09:42:58.566187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:58.119 [2024-07-25 09:42:58.566198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:28:58.119 [2024-07-25 09:42:58.566205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:58.119 [2024-07-25 09:42:58.566259] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:28:58.119 [2024-07-25 09:42:58.566269] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:02.323 [2024-07-25 09:43:02.203719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.323 [2024-07-25 09:43:02.203784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:02.323 [2024-07-25 09:43:02.203800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3644.464 ms 00:29:02.323 [2024-07-25 09:43:02.203809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.323 [2024-07-25 09:43:02.248614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.323 [2024-07-25 09:43:02.248669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:02.323 [2024-07-25 09:43:02.248686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.571 ms 00:29:02.323 [2024-07-25 09:43:02.248695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.323 [2024-07-25 09:43:02.248866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.323 [2024-07-25 09:43:02.248880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:02.323 [2024-07-25 09:43:02.248896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:02.323 [2024-07-25 09:43:02.248905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.323 [2024-07-25 09:43:02.300635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.323 [2024-07-25 09:43:02.300687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:02.323 [2024-07-25 09:43:02.300704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.743 ms 00:29:02.323 [2024-07-25 09:43:02.300713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.323 [2024-07-25 09:43:02.300772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.323 [2024-07-25 09:43:02.300781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:02.323 [2024-07-25 09:43:02.300797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.003 ms 00:29:02.323 [2024-07-25 09:43:02.300806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.323 [2024-07-25 09:43:02.301317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.323 [2024-07-25 09:43:02.301330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:02.323 [2024-07-25 09:43:02.301341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:29:02.323 [2024-07-25 09:43:02.301349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.323 [2024-07-25 09:43:02.301455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.323 [2024-07-25 09:43:02.301469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:02.323 [2024-07-25 09:43:02.301479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:29:02.323 [2024-07-25 09:43:02.301485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.323 [2024-07-25 09:43:02.322341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.323 [2024-07-25 09:43:02.322384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:02.323 [2024-07-25 09:43:02.322398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.870 ms 00:29:02.324 [2024-07-25 09:43:02.322406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.335518] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:02.324 [2024-07-25 09:43:02.338755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.338791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:02.324 [2024-07-25 09:43:02.338803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.276 ms 00:29:02.324 [2024-07-25 09:43:02.338812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.448469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.448529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:02.324 [2024-07-25 09:43:02.448544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.827 ms 00:29:02.324 [2024-07-25 09:43:02.448554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.448736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.448751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:02.324 [2024-07-25 09:43:02.448770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:29:02.324 [2024-07-25 09:43:02.448782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.487317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.487367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:02.324 [2024-07-25 09:43:02.487381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.564 ms 00:29:02.324 [2024-07-25 09:43:02.487393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.523809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 
09:43:02.523851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:02.324 [2024-07-25 09:43:02.523864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.445 ms 00:29:02.324 [2024-07-25 09:43:02.523874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.524738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.524770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:02.324 [2024-07-25 09:43:02.524782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:29:02.324 [2024-07-25 09:43:02.524791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.635760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.635821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:02.324 [2024-07-25 09:43:02.635835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.131 ms 00:29:02.324 [2024-07-25 09:43:02.635847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.675798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.675938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:02.324 [2024-07-25 09:43:02.675972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.988 ms 00:29:02.324 [2024-07-25 09:43:02.675994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.717065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.717249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:02.324 [2024-07-25 09:43:02.717280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.091 ms 00:29:02.324 [2024-07-25 09:43:02.717301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.756085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.756181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:02.324 [2024-07-25 09:43:02.756216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.808 ms 00:29:02.324 [2024-07-25 09:43:02.756254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.756312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.756377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:02.324 [2024-07-25 09:43:02.756438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:02.324 [2024-07-25 09:43:02.756452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.756544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.324 [2024-07-25 09:43:02.756559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:02.324 [2024-07-25 09:43:02.756567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:02.324 [2024-07-25 09:43:02.756576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.324 [2024-07-25 09:43:02.757639] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4218.186 ms, result 0 00:29:02.324 { 00:29:02.324 "name": "ftl0", 00:29:02.324 "uuid": "3be97080-f2b7-443f-968a-fc5f7b226e09" 00:29:02.324 } 00:29:02.324 09:43:02 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:02.324 09:43:02 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:02.583 09:43:02 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:29:02.583 09:43:02 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:02.583 [2024-07-25 09:43:03.184283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.583 [2024-07-25 09:43:03.184353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:02.583 [2024-07-25 09:43:03.184373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:02.583 [2024-07-25 09:43:03.184383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.583 [2024-07-25 09:43:03.184415] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:02.583 [2024-07-25 09:43:03.189177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.583 [2024-07-25 09:43:03.189221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:02.583 [2024-07-25 09:43:03.189245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.751 ms 00:29:02.583 [2024-07-25 09:43:03.189258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.583 [2024-07-25 09:43:03.189568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.583 [2024-07-25 09:43:03.189593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:02.584 [2024-07-25 09:43:03.189621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:29:02.584 [2024-07-25 09:43:03.189634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.584 [2024-07-25 09:43:03.192645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.584 [2024-07-25 09:43:03.192709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:02.584 [2024-07-25 09:43:03.192751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:29:02.584 [2024-07-25 09:43:03.192782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.844 [2024-07-25 09:43:03.199258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.844 [2024-07-25 09:43:03.199369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:02.844 [2024-07-25 09:43:03.199404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.442 ms 00:29:02.844 [2024-07-25 09:43:03.199431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.844 [2024-07-25 09:43:03.239678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.844 [2024-07-25 09:43:03.239852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:02.844 [2024-07-25 09:43:03.239901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.212 ms 00:29:02.844 [2024-07-25 09:43:03.239930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.844 [2024-07-25 
09:43:03.263258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.844 [2024-07-25 09:43:03.263399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:02.844 [2024-07-25 09:43:03.263433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.283 ms 00:29:02.844 [2024-07-25 09:43:03.263455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.844 [2024-07-25 09:43:03.263698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.844 [2024-07-25 09:43:03.263754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:02.844 [2024-07-25 09:43:03.263788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:29:02.844 [2024-07-25 09:43:03.263814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.844 [2024-07-25 09:43:03.300604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.844 [2024-07-25 09:43:03.300700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:02.844 [2024-07-25 09:43:03.300732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.814 ms 00:29:02.844 [2024-07-25 09:43:03.300753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.844 [2024-07-25 09:43:03.337043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.844 [2024-07-25 09:43:03.337138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:02.844 [2024-07-25 09:43:03.337168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.310 ms 00:29:02.844 [2024-07-25 09:43:03.337190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.844 [2024-07-25 09:43:03.373378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.844 [2024-07-25 09:43:03.373468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:02.844 [2024-07-25 09:43:03.373499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.198 ms 00:29:02.844 [2024-07-25 09:43:03.373521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.844 [2024-07-25 09:43:03.410207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.844 [2024-07-25 09:43:03.410312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:02.844 [2024-07-25 09:43:03.410342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.642 ms 00:29:02.844 [2024-07-25 09:43:03.410367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.844 [2024-07-25 09:43:03.410422] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:02.844 [2024-07-25 09:43:03.410454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 
09:43:03.410662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:02.844 [2024-07-25 09:43:03.410808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 
00:29:02.845 [2024-07-25 09:43:03.410935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.410991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 
wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:02.845 [2024-07-25 09:43:03.411537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:02.846 [2024-07-25 09:43:03.411553] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:02.846 [2024-07-25 09:43:03.411560] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3be97080-f2b7-443f-968a-fc5f7b226e09 00:29:02.846 [2024-07-25 09:43:03.411569] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:02.846 [2024-07-25 09:43:03.411576] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:02.846 [2024-07-25 09:43:03.411587] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:02.846 [2024-07-25 09:43:03.411594] 
ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:02.846 [2024-07-25 09:43:03.411602] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:02.846 [2024-07-25 09:43:03.411609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:02.846 [2024-07-25 09:43:03.411618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:02.846 [2024-07-25 09:43:03.411624] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:02.846 [2024-07-25 09:43:03.411631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:02.846 [2024-07-25 09:43:03.411639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.846 [2024-07-25 09:43:03.411649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:02.846 [2024-07-25 09:43:03.411658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:29:02.846 [2024-07-25 09:43:03.411668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.846 [2024-07-25 09:43:03.432008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.846 [2024-07-25 09:43:03.432090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:02.846 [2024-07-25 09:43:03.432121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.331 ms 00:29:02.846 [2024-07-25 09:43:03.432143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:02.846 [2024-07-25 09:43:03.432669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:02.846 [2024-07-25 09:43:03.432717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:02.846 [2024-07-25 09:43:03.432754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:29:02.846 [2024-07-25 09:43:03.432777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.105 [2024-07-25 09:43:03.497849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.105 [2024-07-25 09:43:03.497975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.105 [2024-07-25 09:43:03.498008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.105 [2024-07-25 09:43:03.498044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.105 [2024-07-25 09:43:03.498129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.105 [2024-07-25 09:43:03.498177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.105 [2024-07-25 09:43:03.498266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.105 [2024-07-25 09:43:03.498298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.105 [2024-07-25 09:43:03.498441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.105 [2024-07-25 09:43:03.498494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.105 [2024-07-25 09:43:03.498526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.105 [2024-07-25 09:43:03.498555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.105 [2024-07-25 09:43:03.498603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.105 [2024-07-25 09:43:03.498642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid 
map 00:29:03.105 [2024-07-25 09:43:03.498669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.105 [2024-07-25 09:43:03.498694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.105 [2024-07-25 09:43:03.623437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.105 [2024-07-25 09:43:03.623542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.105 [2024-07-25 09:43:03.623575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.105 [2024-07-25 09:43:03.623597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.364 [2024-07-25 09:43:03.725588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.364 [2024-07-25 09:43:03.725704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:03.364 [2024-07-25 09:43:03.725733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.364 [2024-07-25 09:43:03.725755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.364 [2024-07-25 09:43:03.725895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.364 [2024-07-25 09:43:03.725966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:03.364 [2024-07-25 09:43:03.725996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.364 [2024-07-25 09:43:03.726022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.364 [2024-07-25 09:43:03.726102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.364 [2024-07-25 09:43:03.726146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:03.364 [2024-07-25 09:43:03.726171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.364 [2024-07-25 09:43:03.726201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.364 [2024-07-25 09:43:03.726354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.364 [2024-07-25 09:43:03.726402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:03.364 [2024-07-25 09:43:03.726431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.364 [2024-07-25 09:43:03.726455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.364 [2024-07-25 09:43:03.726530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.364 [2024-07-25 09:43:03.726570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:03.364 [2024-07-25 09:43:03.726595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.364 [2024-07-25 09:43:03.726624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.364 [2024-07-25 09:43:03.726679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.364 [2024-07-25 09:43:03.726712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:03.364 [2024-07-25 09:43:03.726736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.364 [2024-07-25 09:43:03.726758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.364 [2024-07-25 09:43:03.726829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:03.364 [2024-07-25 09:43:03.726867] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:03.364 [2024-07-25 09:43:03.726893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:03.364 [2024-07-25 09:43:03.726921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.364 [2024-07-25 09:43:03.727069] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 543.818 ms, result 0 00:29:03.364 true 00:29:03.364 09:43:03 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 80972 00:29:03.364 09:43:03 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80972 ']' 00:29:03.364 09:43:03 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80972 00:29:03.364 09:43:03 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:29:03.364 09:43:03 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:03.364 09:43:03 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80972 00:29:03.364 killing process with pid 80972 00:29:03.364 09:43:03 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:03.364 09:43:03 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:03.364 09:43:03 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80972' 00:29:03.364 09:43:03 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 80972 00:29:03.364 09:43:03 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 80972 00:29:13.352 09:43:13 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:29:17.549 262144+0 records in 00:29:17.549 262144+0 records out 00:29:17.549 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.54397 s, 303 MB/s 00:29:17.549 09:43:17 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:18.928 09:43:19 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:18.928 [2024-07-25 09:43:19.208615] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:29:18.928 [2024-07-25 09:43:19.208723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81287 ] 00:29:18.928 [2024-07-25 09:43:19.350314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.216 [2024-07-25 09:43:19.576744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.493 [2024-07-25 09:43:19.961738] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:19.493 [2024-07-25 09:43:19.961800] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:19.753 [2024-07-25 09:43:20.117780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.117902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:19.753 [2024-07-25 09:43:20.117937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:19.753 [2024-07-25 09:43:20.117958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.118025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.118050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:19.753 [2024-07-25 09:43:20.118080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:19.753 [2024-07-25 09:43:20.118105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.118171] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:19.753 [2024-07-25 09:43:20.119345] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:19.753 [2024-07-25 09:43:20.119422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.119457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:19.753 [2024-07-25 09:43:20.119481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.262 ms 00:29:19.753 [2024-07-25 09:43:20.119532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.121016] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:19.753 [2024-07-25 09:43:20.141208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.141301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:19.753 [2024-07-25 09:43:20.141319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.231 ms 00:29:19.753 [2024-07-25 09:43:20.141328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.141401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.141415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:19.753 [2024-07-25 09:43:20.141425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:29:19.753 [2024-07-25 09:43:20.141433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.148519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:19.753 [2024-07-25 09:43:20.148550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:19.753 [2024-07-25 09:43:20.148561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.024 ms 00:29:19.753 [2024-07-25 09:43:20.148569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.148647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.148661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:19.753 [2024-07-25 09:43:20.148670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:19.753 [2024-07-25 09:43:20.148677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.148725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.148735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:19.753 [2024-07-25 09:43:20.148742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:19.753 [2024-07-25 09:43:20.148749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.148772] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:19.753 [2024-07-25 09:43:20.154502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.154533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:19.753 [2024-07-25 09:43:20.154543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.748 ms 00:29:19.753 [2024-07-25 09:43:20.154550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.154583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.154591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:19.753 [2024-07-25 09:43:20.154600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:19.753 [2024-07-25 09:43:20.154607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.154654] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:19.753 [2024-07-25 09:43:20.154675] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:19.753 [2024-07-25 09:43:20.154709] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:19.753 [2024-07-25 09:43:20.154725] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:19.753 [2024-07-25 09:43:20.154806] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:19.753 [2024-07-25 09:43:20.154816] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:19.753 [2024-07-25 09:43:20.154826] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:19.753 [2024-07-25 09:43:20.154837] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:19.753 [2024-07-25 09:43:20.154845] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:19.753 [2024-07-25 09:43:20.154853] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:19.753 [2024-07-25 09:43:20.154861] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:19.753 [2024-07-25 09:43:20.154868] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:19.753 [2024-07-25 09:43:20.154875] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:19.753 [2024-07-25 09:43:20.154883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.154892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:19.753 [2024-07-25 09:43:20.154901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:29:19.753 [2024-07-25 09:43:20.154907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.154974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.753 [2024-07-25 09:43:20.154987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:19.753 [2024-07-25 09:43:20.154994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:19.753 [2024-07-25 09:43:20.155000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.753 [2024-07-25 09:43:20.155081] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:19.753 [2024-07-25 09:43:20.155091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:19.753 [2024-07-25 09:43:20.155101] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:19.753 [2024-07-25 09:43:20.155108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.753 [2024-07-25 09:43:20.155116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:19.753 [2024-07-25 09:43:20.155122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:19.753 [2024-07-25 09:43:20.155129] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:19.753 [2024-07-25 09:43:20.155136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:19.753 [2024-07-25 09:43:20.155144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:19.753 [2024-07-25 09:43:20.155152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:19.753 [2024-07-25 09:43:20.155158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:19.753 [2024-07-25 09:43:20.155165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:19.753 [2024-07-25 09:43:20.155172] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:19.753 [2024-07-25 09:43:20.155178] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:19.753 [2024-07-25 09:43:20.155185] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:19.753 [2024-07-25 09:43:20.155191] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.753 [2024-07-25 09:43:20.155198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:19.753 [2024-07-25 09:43:20.155221] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:19.753 [2024-07-25 09:43:20.155229] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.753 [2024-07-25 09:43:20.155236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:19.753 [2024-07-25 09:43:20.155271] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:19.753 [2024-07-25 09:43:20.155279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.753 [2024-07-25 09:43:20.155286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:19.753 [2024-07-25 09:43:20.155294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:19.754 [2024-07-25 09:43:20.155302] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.754 [2024-07-25 09:43:20.155310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:19.754 [2024-07-25 09:43:20.155317] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:19.754 [2024-07-25 09:43:20.155324] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.754 [2024-07-25 09:43:20.155331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:19.754 [2024-07-25 09:43:20.155339] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:19.754 [2024-07-25 09:43:20.155346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:19.754 [2024-07-25 09:43:20.155353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:19.754 [2024-07-25 09:43:20.155361] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:19.754 [2024-07-25 09:43:20.155368] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:19.754 [2024-07-25 09:43:20.155375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:19.754 [2024-07-25 09:43:20.155383] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:19.754 [2024-07-25 09:43:20.155390] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:19.754 [2024-07-25 09:43:20.155397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:19.754 [2024-07-25 09:43:20.155404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:19.754 [2024-07-25 09:43:20.155411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.754 [2024-07-25 09:43:20.155419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:19.754 [2024-07-25 09:43:20.155426] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:19.754 [2024-07-25 09:43:20.155433] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.754 [2024-07-25 09:43:20.155440] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:19.754 [2024-07-25 09:43:20.155448] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:19.754 [2024-07-25 09:43:20.155456] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:19.754 [2024-07-25 09:43:20.155463] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:19.754 [2024-07-25 09:43:20.155472] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:19.754 [2024-07-25 09:43:20.155479] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:19.754 [2024-07-25 09:43:20.155487] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:19.754 
[2024-07-25 09:43:20.155494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:19.754 [2024-07-25 09:43:20.155501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:19.754 [2024-07-25 09:43:20.155509] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:19.754 [2024-07-25 09:43:20.155517] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:19.754 [2024-07-25 09:43:20.155527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.754 [2024-07-25 09:43:20.155535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:19.754 [2024-07-25 09:43:20.155544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:19.754 [2024-07-25 09:43:20.155552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:19.754 [2024-07-25 09:43:20.155559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:19.754 [2024-07-25 09:43:20.155567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:19.754 [2024-07-25 09:43:20.155575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:19.754 [2024-07-25 09:43:20.155583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:19.754 [2024-07-25 09:43:20.155590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:19.754 [2024-07-25 09:43:20.155598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:19.754 [2024-07-25 09:43:20.155605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:19.754 [2024-07-25 09:43:20.155613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:19.754 [2024-07-25 09:43:20.155621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:19.754 [2024-07-25 09:43:20.155628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:19.754 [2024-07-25 09:43:20.155636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:19.754 [2024-07-25 09:43:20.155644] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:19.754 [2024-07-25 09:43:20.155653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:19.754 [2024-07-25 09:43:20.155664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:19.754 [2024-07-25 09:43:20.155674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:19.754 [2024-07-25 09:43:20.155682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:19.754 [2024-07-25 09:43:20.155691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:19.754 [2024-07-25 09:43:20.155700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.155709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:19.754 [2024-07-25 09:43:20.155717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:29:19.754 [2024-07-25 09:43:20.155724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.754 [2024-07-25 09:43:20.206764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.206821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:19.754 [2024-07-25 09:43:20.206835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.082 ms 00:29:19.754 [2024-07-25 09:43:20.206859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.754 [2024-07-25 09:43:20.206957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.206966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:19.754 [2024-07-25 09:43:20.206974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:29:19.754 [2024-07-25 09:43:20.206981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.754 [2024-07-25 09:43:20.259044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.259082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:19.754 [2024-07-25 09:43:20.259094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.093 ms 00:29:19.754 [2024-07-25 09:43:20.259101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.754 [2024-07-25 09:43:20.259155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.259163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:19.754 [2024-07-25 09:43:20.259171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:19.754 [2024-07-25 09:43:20.259181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.754 [2024-07-25 09:43:20.259714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.259733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:19.754 [2024-07-25 09:43:20.259742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:29:19.754 [2024-07-25 09:43:20.259750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.754 [2024-07-25 09:43:20.259875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.259888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:19.754 [2024-07-25 09:43:20.259897] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:29:19.754 [2024-07-25 09:43:20.259905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.754 [2024-07-25 09:43:20.280156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.280194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:19.754 [2024-07-25 09:43:20.280205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.250 ms 00:29:19.754 [2024-07-25 09:43:20.280216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.754 [2024-07-25 09:43:20.299428] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:19.754 [2024-07-25 09:43:20.299467] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:19.754 [2024-07-25 09:43:20.299479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.299486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:19.754 [2024-07-25 09:43:20.299494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.134 ms 00:29:19.754 [2024-07-25 09:43:20.299501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.754 [2024-07-25 09:43:20.328950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.328985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:19.754 [2024-07-25 09:43:20.329000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.465 ms 00:29:19.754 [2024-07-25 09:43:20.329007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:19.754 [2024-07-25 09:43:20.348696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:19.754 [2024-07-25 09:43:20.348750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:19.754 [2024-07-25 09:43:20.348763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.685 ms 00:29:19.754 [2024-07-25 09:43:20.348771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.014 [2024-07-25 09:43:20.368128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.014 [2024-07-25 09:43:20.368164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:20.014 [2024-07-25 09:43:20.368174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.348 ms 00:29:20.014 [2024-07-25 09:43:20.368181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.014 [2024-07-25 09:43:20.368957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.014 [2024-07-25 09:43:20.368991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:20.014 [2024-07-25 09:43:20.369001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:29:20.014 [2024-07-25 09:43:20.369008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.014 [2024-07-25 09:43:20.456574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.014 [2024-07-25 09:43:20.456649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:20.014 [2024-07-25 09:43:20.456664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.712 ms 00:29:20.014 [2024-07-25 09:43:20.456672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.014 [2024-07-25 09:43:20.468653] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:20.014 [2024-07-25 09:43:20.471960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.014 [2024-07-25 09:43:20.471989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:20.014 [2024-07-25 09:43:20.472000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.249 ms 00:29:20.014 [2024-07-25 09:43:20.472009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.014 [2024-07-25 09:43:20.472103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.014 [2024-07-25 09:43:20.472114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:20.014 [2024-07-25 09:43:20.472122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:20.014 [2024-07-25 09:43:20.472129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.014 [2024-07-25 09:43:20.472209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.014 [2024-07-25 09:43:20.472221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:20.014 [2024-07-25 09:43:20.472242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:20.014 [2024-07-25 09:43:20.472250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.014 [2024-07-25 09:43:20.472269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.014 [2024-07-25 09:43:20.472277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:20.014 [2024-07-25 09:43:20.472284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:20.014 [2024-07-25 09:43:20.472290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.014 [2024-07-25 09:43:20.472319] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:20.014 [2024-07-25 09:43:20.472328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.014 [2024-07-25 09:43:20.472335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:20.014 [2024-07-25 09:43:20.472345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:20.014 [2024-07-25 09:43:20.472368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.014 [2024-07-25 09:43:20.510318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.014 [2024-07-25 09:43:20.510354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:20.014 [2024-07-25 09:43:20.510366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.006 ms 00:29:20.014 [2024-07-25 09:43:20.510373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:20.014 [2024-07-25 09:43:20.510447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:20.014 [2024-07-25 09:43:20.510458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:20.014 [2024-07-25 09:43:20.510466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:20.014 [2024-07-25 09:43:20.510473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:20.014 [2024-07-25 09:43:20.511655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 394.105 ms, result 0 00:29:53.229  Copying: 30/1024 [MB] (30 MBps) Copying: 61/1024 [MB] (31 MBps) Copying: 90/1024 [MB] (29 MBps) Copying: 120/1024 [MB] (29 MBps) Copying: 150/1024 [MB] (30 MBps) Copying: 179/1024 [MB] (29 MBps) Copying: 208/1024 [MB] (29 MBps) Copying: 238/1024 [MB] (30 MBps) Copying: 268/1024 [MB] (29 MBps) Copying: 296/1024 [MB] (28 MBps) Copying: 325/1024 [MB] (29 MBps) Copying: 355/1024 [MB] (29 MBps) Copying: 385/1024 [MB] (29 MBps) Copying: 414/1024 [MB] (28 MBps) Copying: 443/1024 [MB] (29 MBps) Copying: 474/1024 [MB] (30 MBps) Copying: 505/1024 [MB] (31 MBps) Copying: 536/1024 [MB] (31 MBps) Copying: 567/1024 [MB] (31 MBps) Copying: 599/1024 [MB] (31 MBps) Copying: 630/1024 [MB] (30 MBps) Copying: 661/1024 [MB] (31 MBps) Copying: 692/1024 [MB] (31 MBps) Copying: 722/1024 [MB] (30 MBps) Copying: 756/1024 [MB] (33 MBps) Copying: 789/1024 [MB] (32 MBps) Copying: 822/1024 [MB] (32 MBps) Copying: 854/1024 [MB] (32 MBps) Copying: 886/1024 [MB] (32 MBps) Copying: 919/1024 [MB] (32 MBps) Copying: 951/1024 [MB] (32 MBps) Copying: 982/1024 [MB] (31 MBps) Copying: 1014/1024 [MB] (31 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-25 09:43:53.756626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.229 [2024-07-25 09:43:53.756724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:53.229 [2024-07-25 09:43:53.756776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:53.229 [2024-07-25 09:43:53.756818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.229 [2024-07-25 09:43:53.756873] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:53.229 [2024-07-25 09:43:53.761556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.229 [2024-07-25 09:43:53.761654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:53.229 [2024-07-25 09:43:53.761692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.634 ms 00:29:53.229 [2024-07-25 09:43:53.761743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.229 [2024-07-25 09:43:53.763701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.229 [2024-07-25 09:43:53.763819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:53.230 [2024-07-25 09:43:53.763873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.895 ms 00:29:53.230 [2024-07-25 09:43:53.763909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.230 [2024-07-25 09:43:53.783494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.230 [2024-07-25 09:43:53.783637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:53.230 [2024-07-25 09:43:53.783675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.551 ms 00:29:53.230 [2024-07-25 09:43:53.783709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.230 [2024-07-25 09:43:53.789714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.230 [2024-07-25 09:43:53.789774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:53.230 [2024-07-25 09:43:53.789786] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 5.958 ms 00:29:53.230 [2024-07-25 09:43:53.789794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.230 [2024-07-25 09:43:53.838385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.230 [2024-07-25 09:43:53.838500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:53.230 [2024-07-25 09:43:53.838524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.599 ms 00:29:53.230 [2024-07-25 09:43:53.838536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.491 [2024-07-25 09:43:53.866312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.491 [2024-07-25 09:43:53.866390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:53.491 [2024-07-25 09:43:53.866407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.729 ms 00:29:53.491 [2024-07-25 09:43:53.866416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.491 [2024-07-25 09:43:53.866614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.491 [2024-07-25 09:43:53.866627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:53.491 [2024-07-25 09:43:53.866637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:29:53.491 [2024-07-25 09:43:53.866649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.491 [2024-07-25 09:43:53.914364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.491 [2024-07-25 09:43:53.914438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:29:53.491 [2024-07-25 09:43:53.914453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.787 ms 00:29:53.491 [2024-07-25 09:43:53.914462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.491 [2024-07-25 09:43:53.961492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.491 [2024-07-25 09:43:53.961572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:29:53.491 [2024-07-25 09:43:53.961587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.034 ms 00:29:53.491 [2024-07-25 09:43:53.961595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.491 [2024-07-25 09:43:54.007792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.491 [2024-07-25 09:43:54.007863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:53.491 [2024-07-25 09:43:54.007878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.197 ms 00:29:53.491 [2024-07-25 09:43:54.007908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.491 [2024-07-25 09:43:54.053976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.491 [2024-07-25 09:43:54.054057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:53.491 [2024-07-25 09:43:54.054072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.018 ms 00:29:53.491 [2024-07-25 09:43:54.054080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.491 [2024-07-25 09:43:54.054156] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:53.491 [2024-07-25 09:43:54.054172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 
/ 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:53.491 [2024-07-25 09:43:54.054280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054693] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.054992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 
09:43:54.055029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:53.492 [2024-07-25 09:43:54.055321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
00:29:53.492 [2024-07-25 09:43:54.055339] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:53.493 [2024-07-25 09:43:54.055348] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3be97080-f2b7-443f-968a-fc5f7b226e09 00:29:53.493 [2024-07-25 09:43:54.055357] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:53.493 [2024-07-25 09:43:54.055371] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:53.493 [2024-07-25 09:43:54.055380] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:53.493 [2024-07-25 09:43:54.055389] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:53.493 [2024-07-25 09:43:54.055396] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:53.493 [2024-07-25 09:43:54.055404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:53.493 [2024-07-25 09:43:54.055412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:53.493 [2024-07-25 09:43:54.055420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:53.493 [2024-07-25 09:43:54.055427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:53.493 [2024-07-25 09:43:54.055437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.493 [2024-07-25 09:43:54.055445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:53.493 [2024-07-25 09:43:54.055455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.284 ms 00:29:53.493 [2024-07-25 09:43:54.055466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.493 [2024-07-25 09:43:54.078111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.493 [2024-07-25 09:43:54.078176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:53.493 [2024-07-25 09:43:54.078192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.621 ms 00:29:53.493 [2024-07-25 09:43:54.078214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.493 [2024-07-25 09:43:54.078808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.493 [2024-07-25 09:43:54.078829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:53.493 [2024-07-25 09:43:54.078839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:29:53.493 [2024-07-25 09:43:54.078847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.753 [2024-07-25 09:43:54.131339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.753 [2024-07-25 09:43:54.131404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:53.753 [2024-07-25 09:43:54.131420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.753 [2024-07-25 09:43:54.131429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.753 [2024-07-25 09:43:54.131507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.753 [2024-07-25 09:43:54.131516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:53.753 [2024-07-25 09:43:54.131525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.753 [2024-07-25 09:43:54.131533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.753 [2024-07-25 
09:43:54.131620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.753 [2024-07-25 09:43:54.131633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:53.753 [2024-07-25 09:43:54.131642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.753 [2024-07-25 09:43:54.131650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.753 [2024-07-25 09:43:54.131668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.753 [2024-07-25 09:43:54.131677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:53.753 [2024-07-25 09:43:54.131685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.753 [2024-07-25 09:43:54.131693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.753 [2024-07-25 09:43:54.267537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:53.753 [2024-07-25 09:43:54.267620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:53.753 [2024-07-25 09:43:54.267640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:53.753 [2024-07-25 09:43:54.267652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.013 [2024-07-25 09:43:54.381958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.013 [2024-07-25 09:43:54.382028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:54.013 [2024-07-25 09:43:54.382041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.013 [2024-07-25 09:43:54.382050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.013 [2024-07-25 09:43:54.382135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.013 [2024-07-25 09:43:54.382156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:54.013 [2024-07-25 09:43:54.382165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.013 [2024-07-25 09:43:54.382172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.013 [2024-07-25 09:43:54.382210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.013 [2024-07-25 09:43:54.382219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:54.013 [2024-07-25 09:43:54.382226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.013 [2024-07-25 09:43:54.382257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.013 [2024-07-25 09:43:54.382366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.013 [2024-07-25 09:43:54.382377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:54.013 [2024-07-25 09:43:54.382389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.013 [2024-07-25 09:43:54.382396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.013 [2024-07-25 09:43:54.382430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.013 [2024-07-25 09:43:54.382440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:54.013 [2024-07-25 09:43:54.382447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.013 [2024-07-25 09:43:54.382453] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.013 [2024-07-25 09:43:54.382488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.013 [2024-07-25 09:43:54.382512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:54.013 [2024-07-25 09:43:54.382524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.013 [2024-07-25 09:43:54.382532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.013 [2024-07-25 09:43:54.382576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:54.013 [2024-07-25 09:43:54.382594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:54.013 [2024-07-25 09:43:54.382603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:54.013 [2024-07-25 09:43:54.382610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:54.013 [2024-07-25 09:43:54.382731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 627.280 ms, result 0 00:29:56.551 00:29:56.551 00:29:56.551 09:43:56 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:29:56.551 [2024-07-25 09:43:56.782166] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:56.551 [2024-07-25 09:43:56.782413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81667 ] 00:29:56.551 [2024-07-25 09:43:56.931256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.810 [2024-07-25 09:43:57.181610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.069 [2024-07-25 09:43:57.619924] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:57.069 [2024-07-25 09:43:57.620123] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:57.330 [2024-07-25 09:43:57.779459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.779657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:57.330 [2024-07-25 09:43:57.779717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:57.330 [2024-07-25 09:43:57.779754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.779875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.779969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:57.330 [2024-07-25 09:43:57.780029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:29:57.330 [2024-07-25 09:43:57.780085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.780157] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:57.330 [2024-07-25 09:43:57.781516] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:57.330 [2024-07-25 09:43:57.781661] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.781752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:57.330 [2024-07-25 09:43:57.781816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.524 ms 00:29:57.330 [2024-07-25 09:43:57.781869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.784057] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:57.330 [2024-07-25 09:43:57.805247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.805418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:57.330 [2024-07-25 09:43:57.805456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.229 ms 00:29:57.330 [2024-07-25 09:43:57.805480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.805630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.805669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:57.330 [2024-07-25 09:43:57.805709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:29:57.330 [2024-07-25 09:43:57.805740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.813694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.813830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:57.330 [2024-07-25 09:43:57.813863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.788 ms 00:29:57.330 [2024-07-25 09:43:57.813886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.814005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.814039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:57.330 [2024-07-25 09:43:57.814071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:29:57.330 [2024-07-25 09:43:57.814094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.814195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.814255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:57.330 [2024-07-25 09:43:57.814303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:29:57.330 [2024-07-25 09:43:57.814333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.814383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:57.330 [2024-07-25 09:43:57.820604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.820713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:57.330 [2024-07-25 09:43:57.820751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.242 ms 00:29:57.330 [2024-07-25 09:43:57.820778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.820875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.820927] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:57.330 [2024-07-25 09:43:57.820958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:57.330 [2024-07-25 09:43:57.820991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.821086] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:57.330 [2024-07-25 09:43:57.821142] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:57.330 [2024-07-25 09:43:57.821219] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:57.330 [2024-07-25 09:43:57.821301] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:29:57.330 [2024-07-25 09:43:57.821436] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:57.330 [2024-07-25 09:43:57.821485] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:57.330 [2024-07-25 09:43:57.821530] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:29:57.330 [2024-07-25 09:43:57.821577] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:57.330 [2024-07-25 09:43:57.821624] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:57.330 [2024-07-25 09:43:57.821678] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:57.330 [2024-07-25 09:43:57.821709] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:57.330 [2024-07-25 09:43:57.821719] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:57.330 [2024-07-25 09:43:57.821728] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:57.330 [2024-07-25 09:43:57.821741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.821751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:57.330 [2024-07-25 09:43:57.821761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:29:57.330 [2024-07-25 09:43:57.821769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.821856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.330 [2024-07-25 09:43:57.821868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:57.330 [2024-07-25 09:43:57.821877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:29:57.330 [2024-07-25 09:43:57.821885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.330 [2024-07-25 09:43:57.821979] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:57.331 [2024-07-25 09:43:57.821994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:57.331 [2024-07-25 09:43:57.822004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:57.331 [2024-07-25 09:43:57.822012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:57.331 
[2024-07-25 09:43:57.822030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822039] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:57.331 [2024-07-25 09:43:57.822047] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:57.331 [2024-07-25 09:43:57.822055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822063] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:57.331 [2024-07-25 09:43:57.822073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:57.331 [2024-07-25 09:43:57.822081] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:57.331 [2024-07-25 09:43:57.822088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:57.331 [2024-07-25 09:43:57.822096] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:57.331 [2024-07-25 09:43:57.822103] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:57.331 [2024-07-25 09:43:57.822111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:57.331 [2024-07-25 09:43:57.822127] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:57.331 [2024-07-25 09:43:57.822134] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:57.331 [2024-07-25 09:43:57.822167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822176] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.331 [2024-07-25 09:43:57.822183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:57.331 [2024-07-25 09:43:57.822191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822198] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.331 [2024-07-25 09:43:57.822207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:57.331 [2024-07-25 09:43:57.822214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822222] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.331 [2024-07-25 09:43:57.822242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:57.331 [2024-07-25 09:43:57.822251] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:57.331 [2024-07-25 09:43:57.822266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:57.331 [2024-07-25 09:43:57.822274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822282] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:57.331 [2024-07-25 09:43:57.822289] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:57.331 [2024-07-25 09:43:57.822297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:57.331 [2024-07-25 09:43:57.822304] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.25 MiB 00:29:57.331 [2024-07-25 09:43:57.822311] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:57.331 [2024-07-25 09:43:57.822319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:57.331 [2024-07-25 09:43:57.822326] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822333] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:57.331 [2024-07-25 09:43:57.822341] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:57.331 [2024-07-25 09:43:57.822348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822355] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:57.331 [2024-07-25 09:43:57.822364] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:57.331 [2024-07-25 09:43:57.822372] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:57.331 [2024-07-25 09:43:57.822380] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:57.331 [2024-07-25 09:43:57.822388] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:57.331 [2024-07-25 09:43:57.822396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:57.331 [2024-07-25 09:43:57.822403] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:57.331 [2024-07-25 09:43:57.822410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:57.331 [2024-07-25 09:43:57.822417] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:57.331 [2024-07-25 09:43:57.822424] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:57.331 [2024-07-25 09:43:57.822434] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:57.331 [2024-07-25 09:43:57.822444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.331 [2024-07-25 09:43:57.822454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:57.331 [2024-07-25 09:43:57.822462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:57.331 [2024-07-25 09:43:57.822471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:57.331 [2024-07-25 09:43:57.822479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:57.331 [2024-07-25 09:43:57.822487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:57.331 [2024-07-25 09:43:57.822496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:57.331 [2024-07-25 09:43:57.822505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:57.331 [2024-07-25 09:43:57.822513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 
00:29:57.331 [2024-07-25 09:43:57.822521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:57.331 [2024-07-25 09:43:57.822529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:57.331 [2024-07-25 09:43:57.822537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:57.331 [2024-07-25 09:43:57.822545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:57.331 [2024-07-25 09:43:57.822553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:57.331 [2024-07-25 09:43:57.822562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:57.331 [2024-07-25 09:43:57.822569] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:57.331 [2024-07-25 09:43:57.822580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:57.331 [2024-07-25 09:43:57.822589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:57.331 [2024-07-25 09:43:57.822597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:57.331 [2024-07-25 09:43:57.822606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:57.331 [2024-07-25 09:43:57.822614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:57.331 [2024-07-25 09:43:57.822625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.331 [2024-07-25 09:43:57.822634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:57.331 [2024-07-25 09:43:57.822644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:29:57.331 [2024-07-25 09:43:57.822652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.331 [2024-07-25 09:43:57.882008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.331 [2024-07-25 09:43:57.882167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:57.331 [2024-07-25 09:43:57.882211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.407 ms 00:29:57.331 [2024-07-25 09:43:57.882268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.331 [2024-07-25 09:43:57.882407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.331 [2024-07-25 09:43:57.882448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:57.331 [2024-07-25 09:43:57.882481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:29:57.331 [2024-07-25 09:43:57.882513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.331 [2024-07-25 09:43:57.939013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:57.331 [2024-07-25 09:43:57.939176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:57.331 [2024-07-25 09:43:57.939214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.470 ms 00:29:57.331 [2024-07-25 09:43:57.939253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.331 [2024-07-25 09:43:57.939339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.331 [2024-07-25 09:43:57.939386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:57.331 [2024-07-25 09:43:57.939429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:57.331 [2024-07-25 09:43:57.939462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.331 [2024-07-25 09:43:57.939996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.331 [2024-07-25 09:43:57.940067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:57.331 [2024-07-25 09:43:57.940105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:29:57.331 [2024-07-25 09:43:57.940135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.332 [2024-07-25 09:43:57.940337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.332 [2024-07-25 09:43:57.940392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:57.332 [2024-07-25 09:43:57.940426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:29:57.332 [2024-07-25 09:43:57.940461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.591 [2024-07-25 09:43:57.963730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.591 [2024-07-25 09:43:57.963896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:57.591 [2024-07-25 09:43:57.963938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.263 ms 00:29:57.591 [2024-07-25 09:43:57.963963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.591 [2024-07-25 09:43:57.987181] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:57.591 [2024-07-25 09:43:57.987364] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:57.591 [2024-07-25 09:43:57.987421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.591 [2024-07-25 09:43:57.987445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:57.591 [2024-07-25 09:43:57.987470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.311 ms 00:29:57.591 [2024-07-25 09:43:57.987492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.591 [2024-07-25 09:43:58.023388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.591 [2024-07-25 09:43:58.023635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:57.591 [2024-07-25 09:43:58.023700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.826 ms 00:29:57.591 [2024-07-25 09:43:58.023738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.591 [2024-07-25 09:43:58.047769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.591 [2024-07-25 09:43:58.047947] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:57.591 [2024-07-25 09:43:58.047984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.983 ms 00:29:57.591 [2024-07-25 09:43:58.048007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.591 [2024-07-25 09:43:58.070787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.591 [2024-07-25 09:43:58.070946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:57.591 [2024-07-25 09:43:58.070981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.711 ms 00:29:57.591 [2024-07-25 09:43:58.071003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.591 [2024-07-25 09:43:58.072012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.591 [2024-07-25 09:43:58.072112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:57.591 [2024-07-25 09:43:58.072151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:29:57.591 [2024-07-25 09:43:58.072196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.591 [2024-07-25 09:43:58.177404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.591 [2024-07-25 09:43:58.177575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:57.591 [2024-07-25 09:43:58.177625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.335 ms 00:29:57.591 [2024-07-25 09:43:58.177650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.591 [2024-07-25 09:43:58.193687] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:57.592 [2024-07-25 09:43:58.197495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.592 [2024-07-25 09:43:58.197639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:57.592 [2024-07-25 09:43:58.197676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.763 ms 00:29:57.592 [2024-07-25 09:43:58.197702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.592 [2024-07-25 09:43:58.197844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.592 [2024-07-25 09:43:58.197895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:57.592 [2024-07-25 09:43:58.197942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:57.592 [2024-07-25 09:43:58.197978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.592 [2024-07-25 09:43:58.198102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.592 [2024-07-25 09:43:58.198147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:57.592 [2024-07-25 09:43:58.198181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:29:57.592 [2024-07-25 09:43:58.198216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.592 [2024-07-25 09:43:58.198306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.592 [2024-07-25 09:43:58.198347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:57.592 [2024-07-25 09:43:58.198379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:57.592 [2024-07-25 09:43:58.198403] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:57.592 [2024-07-25 09:43:58.198460] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:57.592 [2024-07-25 09:43:58.198496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.592 [2024-07-25 09:43:58.198525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:57.592 [2024-07-25 09:43:58.198554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:29:57.592 [2024-07-25 09:43:58.198587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.851 [2024-07-25 09:43:58.244906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.851 [2024-07-25 09:43:58.245073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:57.851 [2024-07-25 09:43:58.245120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.357 ms 00:29:57.851 [2024-07-25 09:43:58.245149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.851 [2024-07-25 09:43:58.245296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:57.851 [2024-07-25 09:43:58.245344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:57.851 [2024-07-25 09:43:58.245358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:29:57.851 [2024-07-25 09:43:58.245368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:57.851 [2024-07-25 09:43:58.246839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 467.655 ms, result 0 00:30:27.320  Copying: 33/1024 [MB] (33 MBps) Copying: 67/1024 [MB] (33 MBps) Copying: 102/1024 [MB] (34 MBps) Copying: 136/1024 [MB] (34 MBps) Copying: 172/1024 [MB] (35 MBps) Copying: 206/1024 [MB] (33 MBps) Copying: 241/1024 [MB] (35 MBps) Copying: 277/1024 [MB] (36 MBps) Copying: 313/1024 [MB] (35 MBps) Copying: 350/1024 [MB] (36 MBps) Copying: 386/1024 [MB] (35 MBps) Copying: 422/1024 [MB] (35 MBps) Copying: 456/1024 [MB] (34 MBps) Copying: 489/1024 [MB] (32 MBps) Copying: 522/1024 [MB] (33 MBps) Copying: 556/1024 [MB] (34 MBps) Copying: 593/1024 [MB] (36 MBps) Copying: 629/1024 [MB] (35 MBps) Copying: 663/1024 [MB] (34 MBps) Copying: 697/1024 [MB] (34 MBps) Copying: 732/1024 [MB] (35 MBps) Copying: 768/1024 [MB] (35 MBps) Copying: 803/1024 [MB] (35 MBps) Copying: 838/1024 [MB] (35 MBps) Copying: 873/1024 [MB] (34 MBps) Copying: 909/1024 [MB] (35 MBps) Copying: 946/1024 [MB] (37 MBps) Copying: 982/1024 [MB] (36 MBps) Copying: 1019/1024 [MB] (36 MBps) Copying: 1024/1024 [MB] (average 35 MBps)[2024-07-25 09:44:27.868050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.320 [2024-07-25 09:44:27.868140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:27.320 [2024-07-25 09:44:27.868157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:27.320 [2024-07-25 09:44:27.868167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.320 [2024-07-25 09:44:27.868194] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:27.320 [2024-07-25 09:44:27.873615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.320 [2024-07-25 09:44:27.873696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:27.320 [2024-07-25 
09:44:27.873725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.407 ms 00:30:27.320 [2024-07-25 09:44:27.873735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.320 [2024-07-25 09:44:27.873993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.320 [2024-07-25 09:44:27.874010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:27.320 [2024-07-25 09:44:27.874021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:30:27.320 [2024-07-25 09:44:27.874030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.320 [2024-07-25 09:44:27.877857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.320 [2024-07-25 09:44:27.877901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:27.320 [2024-07-25 09:44:27.877913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.817 ms 00:30:27.320 [2024-07-25 09:44:27.877923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.320 [2024-07-25 09:44:27.885075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.320 [2024-07-25 09:44:27.885142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:27.320 [2024-07-25 09:44:27.885155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.125 ms 00:30:27.320 [2024-07-25 09:44:27.885167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.581 [2024-07-25 09:44:27.935804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.581 [2024-07-25 09:44:27.935885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:27.581 [2024-07-25 09:44:27.935902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.591 ms 00:30:27.581 [2024-07-25 09:44:27.935911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.581 [2024-07-25 09:44:27.963720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.581 [2024-07-25 09:44:27.963802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:27.581 [2024-07-25 09:44:27.963818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.806 ms 00:30:27.581 [2024-07-25 09:44:27.963828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.581 [2024-07-25 09:44:27.964011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.581 [2024-07-25 09:44:27.964033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:27.581 [2024-07-25 09:44:27.964044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:30:27.581 [2024-07-25 09:44:27.964053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.581 [2024-07-25 09:44:28.016422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.581 [2024-07-25 09:44:28.016500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:30:27.581 [2024-07-25 09:44:28.016517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.447 ms 00:30:27.581 [2024-07-25 09:44:28.016526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.581 [2024-07-25 09:44:28.067839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.581 [2024-07-25 09:44:28.067918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: persist trim metadata 00:30:27.581 [2024-07-25 09:44:28.067934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.360 ms 00:30:27.581 [2024-07-25 09:44:28.067944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.581 [2024-07-25 09:44:28.117504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.581 [2024-07-25 09:44:28.117585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:27.581 [2024-07-25 09:44:28.117631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.600 ms 00:30:27.581 [2024-07-25 09:44:28.117640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.581 [2024-07-25 09:44:28.167975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.581 [2024-07-25 09:44:28.168037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:27.582 [2024-07-25 09:44:28.168052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.318 ms 00:30:27.582 [2024-07-25 09:44:28.168061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.582 [2024-07-25 09:44:28.168106] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:27.582 [2024-07-25 09:44:28.168134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 
state: free 00:30:27.582 [2024-07-25 09:44:28.168308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 
0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:27.582 [2024-07-25 09:44:28.168755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168974] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.168992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.169000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.169009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.169018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.169027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.169036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.169045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.169054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:27.583 [2024-07-25 09:44:28.169073] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:27.583 [2024-07-25 09:44:28.169085] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3be97080-f2b7-443f-968a-fc5f7b226e09 00:30:27.583 [2024-07-25 09:44:28.169095] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:27.583 [2024-07-25 09:44:28.169103] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:27.583 [2024-07-25 09:44:28.169111] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:27.583 [2024-07-25 09:44:28.169120] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:27.583 [2024-07-25 09:44:28.169128] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:27.583 [2024-07-25 09:44:28.169137] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:27.583 [2024-07-25 09:44:28.169146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:27.583 [2024-07-25 09:44:28.169153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:27.583 [2024-07-25 09:44:28.169160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:27.583 [2024-07-25 09:44:28.169170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.583 [2024-07-25 09:44:28.169182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:27.583 [2024-07-25 09:44:28.169191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:30:27.583 [2024-07-25 09:44:28.169199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.583 [2024-07-25 09:44:28.193305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.583 [2024-07-25 09:44:28.193374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:27.583 [2024-07-25 09:44:28.193408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.070 ms 00:30:27.583 [2024-07-25 09:44:28.193418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.583 
[2024-07-25 09:44:28.193977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:27.583 [2024-07-25 09:44:28.194001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:27.583 [2024-07-25 09:44:28.194012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:30:27.583 [2024-07-25 09:44:28.194029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.844 [2024-07-25 09:44:28.247470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.844 [2024-07-25 09:44:28.247536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:27.844 [2024-07-25 09:44:28.247551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.844 [2024-07-25 09:44:28.247560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.844 [2024-07-25 09:44:28.247639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.844 [2024-07-25 09:44:28.247649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:27.844 [2024-07-25 09:44:28.247659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.844 [2024-07-25 09:44:28.247675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.844 [2024-07-25 09:44:28.247758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.844 [2024-07-25 09:44:28.247774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:27.844 [2024-07-25 09:44:28.247785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.844 [2024-07-25 09:44:28.247794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.844 [2024-07-25 09:44:28.247812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.844 [2024-07-25 09:44:28.247822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:27.844 [2024-07-25 09:44:28.247831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.844 [2024-07-25 09:44:28.247840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:27.844 [2024-07-25 09:44:28.380085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:27.844 [2024-07-25 09:44:28.380157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:27.844 [2024-07-25 09:44:28.380172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:27.844 [2024-07-25 09:44:28.380180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.104 [2024-07-25 09:44:28.490015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.104 [2024-07-25 09:44:28.490077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:28.104 [2024-07-25 09:44:28.490090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.104 [2024-07-25 09:44:28.490111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.104 [2024-07-25 09:44:28.490197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.104 [2024-07-25 09:44:28.490206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:28.104 [2024-07-25 09:44:28.490215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.104 [2024-07-25 
09:44:28.490222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.104 [2024-07-25 09:44:28.490305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.104 [2024-07-25 09:44:28.490318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:28.104 [2024-07-25 09:44:28.490328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.104 [2024-07-25 09:44:28.490337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.104 [2024-07-25 09:44:28.490443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.104 [2024-07-25 09:44:28.490456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:28.104 [2024-07-25 09:44:28.490467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.104 [2024-07-25 09:44:28.490475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.104 [2024-07-25 09:44:28.490513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.104 [2024-07-25 09:44:28.490525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:28.104 [2024-07-25 09:44:28.490533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.104 [2024-07-25 09:44:28.490541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.104 [2024-07-25 09:44:28.490587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.104 [2024-07-25 09:44:28.490605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:28.104 [2024-07-25 09:44:28.490614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.104 [2024-07-25 09:44:28.490622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.104 [2024-07-25 09:44:28.490666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:28.104 [2024-07-25 09:44:28.490685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:28.104 [2024-07-25 09:44:28.490695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:28.104 [2024-07-25 09:44:28.490702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:28.104 [2024-07-25 09:44:28.490831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 624.182 ms, result 0 00:30:29.484 00:30:29.484 00:30:29.484 09:44:29 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:31.387 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:31.387 09:44:31 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:30:31.387 [2024-07-25 09:44:31.977072] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:30:31.387 [2024-07-25 09:44:31.977209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82022 ] 00:30:31.646 [2024-07-25 09:44:32.129289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.903 [2024-07-25 09:44:32.406558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.472 [2024-07-25 09:44:32.873283] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:32.472 [2024-07-25 09:44:32.873360] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:32.472 [2024-07-25 09:44:33.030990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.472 [2024-07-25 09:44:33.031060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:32.472 [2024-07-25 09:44:33.031076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:32.472 [2024-07-25 09:44:33.031085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.472 [2024-07-25 09:44:33.031147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.472 [2024-07-25 09:44:33.031158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:32.472 [2024-07-25 09:44:33.031168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:32.472 [2024-07-25 09:44:33.031179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.472 [2024-07-25 09:44:33.031203] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:32.472 [2024-07-25 09:44:33.032660] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:32.472 [2024-07-25 09:44:33.032700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.472 [2024-07-25 09:44:33.032710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:32.472 [2024-07-25 09:44:33.032721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:30:32.472 [2024-07-25 09:44:33.032730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.472 [2024-07-25 09:44:33.034287] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:32.472 [2024-07-25 09:44:33.058407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.472 [2024-07-25 09:44:33.058471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:32.472 [2024-07-25 09:44:33.058487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.164 ms 00:30:32.472 [2024-07-25 09:44:33.058496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.472 [2024-07-25 09:44:33.058612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.472 [2024-07-25 09:44:33.058627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:32.472 [2024-07-25 09:44:33.058637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:30:32.472 [2024-07-25 09:44:33.058646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.472 [2024-07-25 09:44:33.066425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:32.473 [2024-07-25 09:44:33.066466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:32.473 [2024-07-25 09:44:33.066477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.690 ms 00:30:32.473 [2024-07-25 09:44:33.066485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.473 [2024-07-25 09:44:33.066578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.473 [2024-07-25 09:44:33.066596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:32.473 [2024-07-25 09:44:33.066605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:30:32.473 [2024-07-25 09:44:33.066613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.473 [2024-07-25 09:44:33.066673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.473 [2024-07-25 09:44:33.066683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:32.473 [2024-07-25 09:44:33.066691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:32.473 [2024-07-25 09:44:33.066699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.473 [2024-07-25 09:44:33.066725] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:32.473 [2024-07-25 09:44:33.072711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.473 [2024-07-25 09:44:33.072756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:32.473 [2024-07-25 09:44:33.072767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.005 ms 00:30:32.473 [2024-07-25 09:44:33.072776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.473 [2024-07-25 09:44:33.072818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.473 [2024-07-25 09:44:33.072828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:32.473 [2024-07-25 09:44:33.072837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:30:32.473 [2024-07-25 09:44:33.072845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.473 [2024-07-25 09:44:33.072912] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:32.473 [2024-07-25 09:44:33.072936] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:32.473 [2024-07-25 09:44:33.072973] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:32.473 [2024-07-25 09:44:33.072994] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:30:32.473 [2024-07-25 09:44:33.073089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:32.473 [2024-07-25 09:44:33.073102] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:32.473 [2024-07-25 09:44:33.073114] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:30:32.473 [2024-07-25 09:44:33.073126] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073136] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073146] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:32.473 [2024-07-25 09:44:33.073155] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:32.473 [2024-07-25 09:44:33.073164] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:32.473 [2024-07-25 09:44:33.073172] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:32.473 [2024-07-25 09:44:33.073181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.473 [2024-07-25 09:44:33.073193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:32.473 [2024-07-25 09:44:33.073203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:30:32.473 [2024-07-25 09:44:33.073211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.473 [2024-07-25 09:44:33.073311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.473 [2024-07-25 09:44:33.073324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:32.473 [2024-07-25 09:44:33.073333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:32.473 [2024-07-25 09:44:33.073340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.473 [2024-07-25 09:44:33.073474] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:32.473 [2024-07-25 09:44:33.073488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:32.473 [2024-07-25 09:44:33.073501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:32.473 [2024-07-25 09:44:33.073529] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073537] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:32.473 [2024-07-25 09:44:33.073554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073573] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:32.473 [2024-07-25 09:44:33.073581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:32.473 [2024-07-25 09:44:33.073589] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:32.473 [2024-07-25 09:44:33.073596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:32.473 [2024-07-25 09:44:33.073603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:32.473 [2024-07-25 09:44:33.073611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:32.473 [2024-07-25 09:44:33.073618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:32.473 [2024-07-25 09:44:33.073645] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073652] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:32.473 [2024-07-25 09:44:33.073682] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:32.473 [2024-07-25 09:44:33.073705] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:32.473 [2024-07-25 09:44:33.073726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073732] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:32.473 [2024-07-25 09:44:33.073745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:32.473 [2024-07-25 09:44:33.073764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:32.473 [2024-07-25 09:44:33.073776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:32.473 [2024-07-25 09:44:33.073783] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:32.473 [2024-07-25 09:44:33.073806] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:32.473 [2024-07-25 09:44:33.073814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:32.473 [2024-07-25 09:44:33.073821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:32.473 [2024-07-25 09:44:33.073827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:32.473 [2024-07-25 09:44:33.073843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:32.473 [2024-07-25 09:44:33.073850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073857] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:32.473 [2024-07-25 09:44:33.073864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:32.473 [2024-07-25 09:44:33.073872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073879] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:32.473 [2024-07-25 09:44:33.073887] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:32.473 [2024-07-25 09:44:33.073895] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:32.473 [2024-07-25 09:44:33.073902] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:32.473 
[2024-07-25 09:44:33.073909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:32.473 [2024-07-25 09:44:33.073916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:32.473 [2024-07-25 09:44:33.073923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:32.473 [2024-07-25 09:44:33.073932] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:32.473 [2024-07-25 09:44:33.073942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:32.473 [2024-07-25 09:44:33.073951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:32.473 [2024-07-25 09:44:33.073960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:32.473 [2024-07-25 09:44:33.073968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:32.473 [2024-07-25 09:44:33.073976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:32.474 [2024-07-25 09:44:33.073983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:32.474 [2024-07-25 09:44:33.073992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:32.474 [2024-07-25 09:44:33.074000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:32.474 [2024-07-25 09:44:33.074008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:32.474 [2024-07-25 09:44:33.074015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:32.474 [2024-07-25 09:44:33.074023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:32.474 [2024-07-25 09:44:33.074031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:32.474 [2024-07-25 09:44:33.074038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:32.474 [2024-07-25 09:44:33.074048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:32.474 [2024-07-25 09:44:33.074056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:32.474 [2024-07-25 09:44:33.074063] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:32.474 [2024-07-25 09:44:33.074073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:32.474 [2024-07-25 09:44:33.074085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:32.474 [2024-07-25 09:44:33.074094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:32.474 [2024-07-25 09:44:33.074102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:32.474 [2024-07-25 09:44:33.074110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:32.474 [2024-07-25 09:44:33.074119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.474 [2024-07-25 09:44:33.074127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:32.474 [2024-07-25 09:44:33.074136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:30:32.474 [2024-07-25 09:44:33.074143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.794 [2024-07-25 09:44:33.130907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.794 [2024-07-25 09:44:33.130966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:32.794 [2024-07-25 09:44:33.130981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.815 ms 00:30:32.794 [2024-07-25 09:44:33.130991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.131105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.131115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:32.795 [2024-07-25 09:44:33.131125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:30:32.795 [2024-07-25 09:44:33.131132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.188215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.188285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:32.795 [2024-07-25 09:44:33.188300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.094 ms 00:30:32.795 [2024-07-25 09:44:33.188310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.188378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.188392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:32.795 [2024-07-25 09:44:33.188407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:32.795 [2024-07-25 09:44:33.188425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.188943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.188961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:32.795 [2024-07-25 09:44:33.188973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:30:32.795 [2024-07-25 09:44:33.188982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.189116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.189134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:32.795 [2024-07-25 09:44:33.189145] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:30:32.795 [2024-07-25 09:44:33.189155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.213548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.213689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:32.795 [2024-07-25 09:44:33.213727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.408 ms 00:30:32.795 [2024-07-25 09:44:33.213755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.236898] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:32.795 [2024-07-25 09:44:33.237071] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:32.795 [2024-07-25 09:44:33.237120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.237143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:32.795 [2024-07-25 09:44:33.237168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.238 ms 00:30:32.795 [2024-07-25 09:44:33.237190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.269991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.270136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:32.795 [2024-07-25 09:44:33.270171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.718 ms 00:30:32.795 [2024-07-25 09:44:33.270192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.293423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.293534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:32.795 [2024-07-25 09:44:33.293566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.148 ms 00:30:32.795 [2024-07-25 09:44:33.293587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.315771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.315900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:32.795 [2024-07-25 09:44:33.315932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.127 ms 00:30:32.795 [2024-07-25 09:44:33.315952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:32.795 [2024-07-25 09:44:33.317028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:32.795 [2024-07-25 09:44:33.317093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:32.795 [2024-07-25 09:44:33.317129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:30:32.795 [2024-07-25 09:44:33.317195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.053 [2024-07-25 09:44:33.411266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.053 [2024-07-25 09:44:33.411430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:33.053 [2024-07-25 09:44:33.411465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 94.179 ms 00:30:33.053 [2024-07-25 09:44:33.411497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.053 [2024-07-25 09:44:33.426330] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:33.053 [2024-07-25 09:44:33.429831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.053 [2024-07-25 09:44:33.429950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:33.053 [2024-07-25 09:44:33.429979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.276 ms 00:30:33.053 [2024-07-25 09:44:33.429998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.053 [2024-07-25 09:44:33.430124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.053 [2024-07-25 09:44:33.430151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:33.053 [2024-07-25 09:44:33.430195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:33.053 [2024-07-25 09:44:33.430277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.053 [2024-07-25 09:44:33.430373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.053 [2024-07-25 09:44:33.430411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:33.053 [2024-07-25 09:44:33.430458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:30:33.053 [2024-07-25 09:44:33.430493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.053 [2024-07-25 09:44:33.430545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.053 [2024-07-25 09:44:33.430581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:33.053 [2024-07-25 09:44:33.430610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:33.053 [2024-07-25 09:44:33.430640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.053 [2024-07-25 09:44:33.430700] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:33.053 [2024-07-25 09:44:33.430736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.053 [2024-07-25 09:44:33.430766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:33.053 [2024-07-25 09:44:33.430793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:33.053 [2024-07-25 09:44:33.430814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.053 [2024-07-25 09:44:33.475686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.053 [2024-07-25 09:44:33.475857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:33.053 [2024-07-25 09:44:33.475893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.921 ms 00:30:33.053 [2024-07-25 09:44:33.475922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:33.053 [2024-07-25 09:44:33.476041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:33.053 [2024-07-25 09:44:33.476069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:33.053 [2024-07-25 09:44:33.476094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:30:33.053 [2024-07-25 09:44:33.476114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
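Note: every FTL management step in the trace above is emitted by trace_step as a fixed four-message record: an "Action" (or, on teardown, "Rollback") marker, the step name, its duration in milliseconds, and a status code (0 on success); the total for the whole management process is then reported separately by finish_msg, as in the "FTL startup ... duration = 446.807 ms" message just below. The sketch that follows is a hypothetical helper, not part of the SPDK tree, that pairs the name and duration messages back up from a saved console log; the file argument and the collect_steps name are assumptions made purely for illustration, and the regexes assume the exact NOTICE wording shown in this transcript.

import re
import sys

# Hypothetical helper (not part of SPDK): pull (step name, duration in ms)
# pairs out of a saved autotest console log such as the transcript above.
# Assumes the trace_step messages keep the exact wording shown here:
#   "... 428:trace_step: *NOTICE*: [FTL][ftl0] name: <step>"
#   "... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: <ms> ms"
# and that each message is followed by the next HH:MM:SS.mmm elapsed stamp.
NAME_RE = re.compile(
    r"428:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+?)"
    r"(?=\s+\d{2}:\d{2}:\d{2}\.\d{3}\s|\s*$)")
DUR_RE = re.compile(
    r"430:trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

def collect_steps(text):
    names = NAME_RE.findall(text)
    durations = [float(d) for d in DUR_RE.findall(text)]
    # The name (428) and duration (430) messages strictly alternate per step,
    # so zip() pairs them back up in order.
    return list(zip(names, durations))

if __name__ == "__main__":
    log = open(sys.argv[1], encoding="utf-8", errors="replace").read()
    for name, ms in collect_steps(log):
        print(f"{ms:10.3f} ms  {name}")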
00:30:33.053 [2024-07-25 09:44:33.477476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 446.807 ms, result 0 00:31:06.501  Copying: 32/1024 [MB] (32 MBps) Copying: 65/1024 [MB] (32 MBps) Copying: 96/1024 [MB] (31 MBps) Copying: 129/1024 [MB] (32 MBps) Copying: 162/1024 [MB] (33 MBps) Copying: 196/1024 [MB] (33 MBps) Copying: 230/1024 [MB] (34 MBps) Copying: 264/1024 [MB] (33 MBps) Copying: 299/1024 [MB] (34 MBps) Copying: 332/1024 [MB] (33 MBps) Copying: 367/1024 [MB] (34 MBps) Copying: 398/1024 [MB] (30 MBps) Copying: 428/1024 [MB] (30 MBps) Copying: 459/1024 [MB] (31 MBps) Copying: 489/1024 [MB] (30 MBps) Copying: 520/1024 [MB] (30 MBps) Copying: 550/1024 [MB] (29 MBps) Copying: 579/1024 [MB] (28 MBps) Copying: 608/1024 [MB] (29 MBps) Copying: 638/1024 [MB] (29 MBps) Copying: 668/1024 [MB] (29 MBps) Copying: 702/1024 [MB] (33 MBps) Copying: 733/1024 [MB] (31 MBps) Copying: 764/1024 [MB] (30 MBps) Copying: 794/1024 [MB] (29 MBps) Copying: 823/1024 [MB] (29 MBps) Copying: 853/1024 [MB] (29 MBps) Copying: 883/1024 [MB] (30 MBps) Copying: 914/1024 [MB] (30 MBps) Copying: 943/1024 [MB] (29 MBps) Copying: 973/1024 [MB] (29 MBps) Copying: 1004/1024 [MB] (30 MBps) Copying: 1023/1024 [MB] (19 MBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-25 09:45:07.028569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.501 [2024-07-25 09:45:07.028715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:06.501 [2024-07-25 09:45:07.028753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:06.501 [2024-07-25 09:45:07.028777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.501 [2024-07-25 09:45:07.030184] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:06.501 [2024-07-25 09:45:07.037503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.501 [2024-07-25 09:45:07.037590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:06.501 [2024-07-25 09:45:07.037624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.192 ms 00:31:06.501 [2024-07-25 09:45:07.037644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.501 [2024-07-25 09:45:07.047524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.501 [2024-07-25 09:45:07.047639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:06.501 [2024-07-25 09:45:07.047672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.049 ms 00:31:06.501 [2024-07-25 09:45:07.047695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.501 [2024-07-25 09:45:07.071450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.501 [2024-07-25 09:45:07.071573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:06.501 [2024-07-25 09:45:07.071593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.754 ms 00:31:06.501 [2024-07-25 09:45:07.071602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.501 [2024-07-25 09:45:07.077682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.501 [2024-07-25 09:45:07.077722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:06.501 [2024-07-25 09:45:07.077734] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 6.049 ms 00:31:06.501 [2024-07-25 09:45:07.077742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.760 [2024-07-25 09:45:07.120553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.760 [2024-07-25 09:45:07.120619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:06.760 [2024-07-25 09:45:07.120635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.825 ms 00:31:06.760 [2024-07-25 09:45:07.120645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.760 [2024-07-25 09:45:07.146045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.760 [2024-07-25 09:45:07.146118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:06.760 [2024-07-25 09:45:07.146132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.376 ms 00:31:06.760 [2024-07-25 09:45:07.146139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.760 [2024-07-25 09:45:07.226180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.760 [2024-07-25 09:45:07.226316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:06.760 [2024-07-25 09:45:07.226334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 80.119 ms 00:31:06.760 [2024-07-25 09:45:07.226343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.760 [2024-07-25 09:45:07.267941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.760 [2024-07-25 09:45:07.267998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:06.760 [2024-07-25 09:45:07.268011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.652 ms 00:31:06.760 [2024-07-25 09:45:07.268034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.760 [2024-07-25 09:45:07.308666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.760 [2024-07-25 09:45:07.308723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:06.760 [2024-07-25 09:45:07.308737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.651 ms 00:31:06.760 [2024-07-25 09:45:07.308745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.761 [2024-07-25 09:45:07.348021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.761 [2024-07-25 09:45:07.348077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:06.761 [2024-07-25 09:45:07.348121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.290 ms 00:31:06.761 [2024-07-25 09:45:07.348130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.020 [2024-07-25 09:45:07.388133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.020 [2024-07-25 09:45:07.388189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:07.020 [2024-07-25 09:45:07.388203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.981 ms 00:31:07.020 [2024-07-25 09:45:07.388210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.020 [2024-07-25 09:45:07.388279] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:07.020 [2024-07-25 09:45:07.388310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 
116736 / 261120 wr_cnt: 1 state: open 00:31:07.020 [2024-07-25 09:45:07.388340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:07.020 [2024-07-25 09:45:07.388348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:07.020 [2024-07-25 09:45:07.388356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:07.020 [2024-07-25 09:45:07.388365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:07.020 [2024-07-25 09:45:07.388372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388712] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 
09:45:07.388898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.388993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.389001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.389010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.389019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.389028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:07.021 [2024-07-25 09:45:07.389036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:07.022 [2024-07-25 09:45:07.389044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:07.022 [2024-07-25 09:45:07.389052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:07.022 [2024-07-25 09:45:07.389060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:07.022 [2024-07-25 09:45:07.389068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:07.022 [2024-07-25 09:45:07.389075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:07.022 [2024-07-25 09:45:07.389083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 
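Note on the dump above: ftl_dev_dump_bands prints one line per band in the form "Band N: <valid blocks> / <total blocks> wr_cnt: <count> state: <state>". In this run only Band 1 carries data, with 116736 of its 261120 blocks valid, and the remaining 99 bands are free; that single figure reappears as "total valid LBAs: 116736" in the statistics dump that follows.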
00:31:07.022 [2024-07-25 09:45:07.389099] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:07.022 [2024-07-25 09:45:07.389106] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3be97080-f2b7-443f-968a-fc5f7b226e09 00:31:07.022 [2024-07-25 09:45:07.389114] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 116736 00:31:07.022 [2024-07-25 09:45:07.389121] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 117696 00:31:07.022 [2024-07-25 09:45:07.389128] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 116736 00:31:07.022 [2024-07-25 09:45:07.389140] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0082 00:31:07.022 [2024-07-25 09:45:07.389147] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:07.022 [2024-07-25 09:45:07.389155] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:07.022 [2024-07-25 09:45:07.389180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:07.022 [2024-07-25 09:45:07.389188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:07.022 [2024-07-25 09:45:07.389194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:07.022 [2024-07-25 09:45:07.389202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.022 [2024-07-25 09:45:07.389209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:07.022 [2024-07-25 09:45:07.389217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:31:07.022 [2024-07-25 09:45:07.389224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.022 [2024-07-25 09:45:07.410693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.022 [2024-07-25 09:45:07.410740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:07.022 [2024-07-25 09:45:07.410766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.462 ms 00:31:07.022 [2024-07-25 09:45:07.410774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.022 [2024-07-25 09:45:07.411324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.022 [2024-07-25 09:45:07.411333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:07.022 [2024-07-25 09:45:07.411342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:31:07.022 [2024-07-25 09:45:07.411349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.022 [2024-07-25 09:45:07.456073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.022 [2024-07-25 09:45:07.456130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:07.022 [2024-07-25 09:45:07.456147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.022 [2024-07-25 09:45:07.456155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.022 [2024-07-25 09:45:07.456221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.022 [2024-07-25 09:45:07.456246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:07.022 [2024-07-25 09:45:07.456270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.022 [2024-07-25 09:45:07.456277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.022 
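The statistics dump above gives enough to reproduce the reported write amplification factor, assuming the usual definition (total writes divided by user writes):

    WAF = total writes / user writes = 117696 / 116736 = 1.00822...

which the log rounds to 1.0082. The extra 960 blocks (117696 - 116736) are writes the FTL issued on its own behalf on top of the user data, presumably metadata and band housekeeping.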
[2024-07-25 09:45:07.456351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.022 [2024-07-25 09:45:07.456362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:07.022 [2024-07-25 09:45:07.456370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.022 [2024-07-25 09:45:07.456381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.022 [2024-07-25 09:45:07.456397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.022 [2024-07-25 09:45:07.456406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:07.022 [2024-07-25 09:45:07.456413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.022 [2024-07-25 09:45:07.456420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.022 [2024-07-25 09:45:07.583707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.022 [2024-07-25 09:45:07.583770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:07.022 [2024-07-25 09:45:07.583784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.022 [2024-07-25 09:45:07.583798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.282 [2024-07-25 09:45:07.695608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.282 [2024-07-25 09:45:07.695755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:07.282 [2024-07-25 09:45:07.695784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.282 [2024-07-25 09:45:07.695805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.282 [2024-07-25 09:45:07.695899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.282 [2024-07-25 09:45:07.695921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:07.282 [2024-07-25 09:45:07.695941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.283 [2024-07-25 09:45:07.695959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.283 [2024-07-25 09:45:07.696036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.283 [2024-07-25 09:45:07.696076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:07.283 [2024-07-25 09:45:07.696105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.283 [2024-07-25 09:45:07.696125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.283 [2024-07-25 09:45:07.696335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.283 [2024-07-25 09:45:07.696381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:07.283 [2024-07-25 09:45:07.696412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.283 [2024-07-25 09:45:07.696439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.283 [2024-07-25 09:45:07.696504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.283 [2024-07-25 09:45:07.696552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:07.283 [2024-07-25 09:45:07.696580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.283 [2024-07-25 09:45:07.696602] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.283 [2024-07-25 09:45:07.696656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.283 [2024-07-25 09:45:07.696690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:07.283 [2024-07-25 09:45:07.696700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.283 [2024-07-25 09:45:07.696708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.283 [2024-07-25 09:45:07.696757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:07.283 [2024-07-25 09:45:07.696766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:07.283 [2024-07-25 09:45:07.696775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:07.283 [2024-07-25 09:45:07.696782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.283 [2024-07-25 09:45:07.696904] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 670.939 ms, result 0 00:31:10.569 00:31:10.569 00:31:10.569 09:45:10 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:10.569 [2024-07-25 09:45:10.789554] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:10.569 [2024-07-25 09:45:10.789683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82410 ] 00:31:10.569 [2024-07-25 09:45:10.953930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.827 [2024-07-25 09:45:11.193331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.085 [2024-07-25 09:45:11.607308] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:11.085 [2024-07-25 09:45:11.607386] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:11.344 [2024-07-25 09:45:11.764544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.344 [2024-07-25 09:45:11.764611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:11.344 [2024-07-25 09:45:11.764627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:11.344 [2024-07-25 09:45:11.764636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.344 [2024-07-25 09:45:11.764700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.344 [2024-07-25 09:45:11.764712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:11.344 [2024-07-25 09:45:11.764721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:31:11.344 [2024-07-25 09:45:11.764732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.344 [2024-07-25 09:45:11.764755] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:11.344 [2024-07-25 09:45:11.766110] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:11.344 [2024-07-25 
09:45:11.766150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.344 [2024-07-25 09:45:11.766160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:11.344 [2024-07-25 09:45:11.766170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:31:11.344 [2024-07-25 09:45:11.766179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.344 [2024-07-25 09:45:11.767700] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:11.344 [2024-07-25 09:45:11.790095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.344 [2024-07-25 09:45:11.790168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:11.344 [2024-07-25 09:45:11.790182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.435 ms 00:31:11.344 [2024-07-25 09:45:11.790208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.344 [2024-07-25 09:45:11.790346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.344 [2024-07-25 09:45:11.790362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:11.344 [2024-07-25 09:45:11.790372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:11.344 [2024-07-25 09:45:11.790380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.344 [2024-07-25 09:45:11.798153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.344 [2024-07-25 09:45:11.798200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:11.344 [2024-07-25 09:45:11.798212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.676 ms 00:31:11.344 [2024-07-25 09:45:11.798221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.344 [2024-07-25 09:45:11.798325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.344 [2024-07-25 09:45:11.798344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:11.344 [2024-07-25 09:45:11.798352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:11.344 [2024-07-25 09:45:11.798359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.344 [2024-07-25 09:45:11.798420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.344 [2024-07-25 09:45:11.798429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:11.344 [2024-07-25 09:45:11.798437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:11.344 [2024-07-25 09:45:11.798446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.344 [2024-07-25 09:45:11.798470] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:11.345 [2024-07-25 09:45:11.804344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.345 [2024-07-25 09:45:11.804394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:11.345 [2024-07-25 09:45:11.804406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.893 ms 00:31:11.345 [2024-07-25 09:45:11.804415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.345 [2024-07-25 09:45:11.804464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.345 [2024-07-25 09:45:11.804474] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:11.345 [2024-07-25 09:45:11.804482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:31:11.345 [2024-07-25 09:45:11.804490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.345 [2024-07-25 09:45:11.804563] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:11.345 [2024-07-25 09:45:11.804587] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:11.345 [2024-07-25 09:45:11.804624] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:11.345 [2024-07-25 09:45:11.804644] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:31:11.345 [2024-07-25 09:45:11.804739] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:11.345 [2024-07-25 09:45:11.804751] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:11.345 [2024-07-25 09:45:11.804762] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:11.345 [2024-07-25 09:45:11.804774] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:11.345 [2024-07-25 09:45:11.804784] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:11.345 [2024-07-25 09:45:11.804793] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:11.345 [2024-07-25 09:45:11.804801] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:11.345 [2024-07-25 09:45:11.804809] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:11.345 [2024-07-25 09:45:11.804817] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:11.345 [2024-07-25 09:45:11.804826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.345 [2024-07-25 09:45:11.804836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:11.345 [2024-07-25 09:45:11.804845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:31:11.345 [2024-07-25 09:45:11.804853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.345 [2024-07-25 09:45:11.804931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.345 [2024-07-25 09:45:11.804940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:11.345 [2024-07-25 09:45:11.804949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:11.345 [2024-07-25 09:45:11.804956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.345 [2024-07-25 09:45:11.805048] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:11.345 [2024-07-25 09:45:11.805059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:11.345 [2024-07-25 09:45:11.805070] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:11.345 [2024-07-25 09:45:11.805079] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] 
Region l2p 00:31:11.345 [2024-07-25 09:45:11.805097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:11.345 [2024-07-25 09:45:11.805113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:11.345 [2024-07-25 09:45:11.805120] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805128] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:11.345 [2024-07-25 09:45:11.805136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:11.345 [2024-07-25 09:45:11.805144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:11.345 [2024-07-25 09:45:11.805152] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:11.345 [2024-07-25 09:45:11.805159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:11.345 [2024-07-25 09:45:11.805167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:11.345 [2024-07-25 09:45:11.805174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:11.345 [2024-07-25 09:45:11.805189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:11.345 [2024-07-25 09:45:11.805197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805204] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:11.345 [2024-07-25 09:45:11.805254] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805263] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:11.345 [2024-07-25 09:45:11.805271] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:11.345 [2024-07-25 09:45:11.805279] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805287] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:11.345 [2024-07-25 09:45:11.805294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:11.345 [2024-07-25 09:45:11.805301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805309] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:11.345 [2024-07-25 09:45:11.805317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:11.345 [2024-07-25 09:45:11.805324] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805332] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:11.345 [2024-07-25 09:45:11.805340] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:11.345 [2024-07-25 09:45:11.805347] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:11.345 [2024-07-25 09:45:11.805362] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:11.345 [2024-07-25 09:45:11.805370] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:11.345 [2024-07-25 09:45:11.805378] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:11.345 [2024-07-25 09:45:11.805385] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:11.345 [2024-07-25 09:45:11.805392] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:11.345 [2024-07-25 09:45:11.805399] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:11.345 [2024-07-25 09:45:11.805415] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:11.345 [2024-07-25 09:45:11.805423] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805430] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:11.345 [2024-07-25 09:45:11.805438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:11.345 [2024-07-25 09:45:11.805446] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:11.345 [2024-07-25 09:45:11.805454] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:11.345 [2024-07-25 09:45:11.805463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:11.345 [2024-07-25 09:45:11.805471] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:11.345 [2024-07-25 09:45:11.805479] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:11.345 [2024-07-25 09:45:11.805487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:11.345 [2024-07-25 09:45:11.805494] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:11.345 [2024-07-25 09:45:11.805520] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:11.345 [2024-07-25 09:45:11.805528] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:11.345 [2024-07-25 09:45:11.805538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:11.345 [2024-07-25 09:45:11.805547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:11.345 [2024-07-25 09:45:11.805554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:11.345 [2024-07-25 09:45:11.805563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:11.345 [2024-07-25 09:45:11.805570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:11.345 [2024-07-25 09:45:11.805578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:11.345 [2024-07-25 09:45:11.805585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:11.345 [2024-07-25 09:45:11.805593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:11.345 [2024-07-25 09:45:11.805600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:31:11.345 [2024-07-25 09:45:11.805607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:11.345 [2024-07-25 09:45:11.805613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:11.345 [2024-07-25 09:45:11.805620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:11.345 [2024-07-25 09:45:11.805628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:11.346 [2024-07-25 09:45:11.805636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:11.346 [2024-07-25 09:45:11.805643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:11.346 [2024-07-25 09:45:11.805650] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:11.346 [2024-07-25 09:45:11.805658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:11.346 [2024-07-25 09:45:11.805670] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:11.346 [2024-07-25 09:45:11.805678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:11.346 [2024-07-25 09:45:11.805686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:11.346 [2024-07-25 09:45:11.805694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:11.346 [2024-07-25 09:45:11.805703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.346 [2024-07-25 09:45:11.805710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:11.346 [2024-07-25 09:45:11.805718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:31:11.346 [2024-07-25 09:45:11.805725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.346 [2024-07-25 09:45:11.860639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.346 [2024-07-25 09:45:11.860699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:11.346 [2024-07-25 09:45:11.860712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.964 ms 00:31:11.346 [2024-07-25 09:45:11.860737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.346 [2024-07-25 09:45:11.860847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.346 [2024-07-25 09:45:11.860858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:11.346 [2024-07-25 09:45:11.860867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:31:11.346 [2024-07-25 09:45:11.860875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.346 [2024-07-25 09:45:11.913757] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:31:11.346 [2024-07-25 09:45:11.913813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:11.346 [2024-07-25 09:45:11.913827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.885 ms 00:31:11.346 [2024-07-25 09:45:11.913851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.346 [2024-07-25 09:45:11.913911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.346 [2024-07-25 09:45:11.913919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:11.346 [2024-07-25 09:45:11.913928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:11.346 [2024-07-25 09:45:11.913940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.346 [2024-07-25 09:45:11.914434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.346 [2024-07-25 09:45:11.914450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:11.346 [2024-07-25 09:45:11.914459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:31:11.346 [2024-07-25 09:45:11.914466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.346 [2024-07-25 09:45:11.914605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.346 [2024-07-25 09:45:11.914619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:11.346 [2024-07-25 09:45:11.914627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:31:11.346 [2024-07-25 09:45:11.914635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.346 [2024-07-25 09:45:11.935692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.346 [2024-07-25 09:45:11.935748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:11.346 [2024-07-25 09:45:11.935761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.070 ms 00:31:11.346 [2024-07-25 09:45:11.935773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:11.957232] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:11.605 [2024-07-25 09:45:11.957305] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:11.605 [2024-07-25 09:45:11.957322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:11.957331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:11.605 [2024-07-25 09:45:11.957343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.446 ms 00:31:11.605 [2024-07-25 09:45:11.957351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:11.990681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:11.990872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:11.605 [2024-07-25 09:45:11.990906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.307 ms 00:31:11.605 [2024-07-25 09:45:11.990927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.013032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.013186] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:11.605 [2024-07-25 09:45:12.013220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.046 ms 00:31:11.605 [2024-07-25 09:45:12.013279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.034841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.034992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:11.605 [2024-07-25 09:45:12.035023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.511 ms 00:31:11.605 [2024-07-25 09:45:12.035044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.036059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.036134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:11.605 [2024-07-25 09:45:12.036168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:31:11.605 [2024-07-25 09:45:12.036189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.140373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.140532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:11.605 [2024-07-25 09:45:12.140571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.266 ms 00:31:11.605 [2024-07-25 09:45:12.140608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.156968] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:11.605 [2024-07-25 09:45:12.160599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.160730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:11.605 [2024-07-25 09:45:12.160763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.925 ms 00:31:11.605 [2024-07-25 09:45:12.160787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.160916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.160959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:11.605 [2024-07-25 09:45:12.160992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:11.605 [2024-07-25 09:45:12.161029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.162785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.162825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:11.605 [2024-07-25 09:45:12.162837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.690 ms 00:31:11.605 [2024-07-25 09:45:12.162847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.162890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.162900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:11.605 [2024-07-25 09:45:12.162909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:11.605 [2024-07-25 09:45:12.162917] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.162952] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:11.605 [2024-07-25 09:45:12.162963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.162974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:11.605 [2024-07-25 09:45:12.162982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:11.605 [2024-07-25 09:45:12.162990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.207690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.207767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:11.605 [2024-07-25 09:45:12.207783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.762 ms 00:31:11.605 [2024-07-25 09:45:12.207801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.207934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:11.605 [2024-07-25 09:45:12.207944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:11.605 [2024-07-25 09:45:12.207953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:11.605 [2024-07-25 09:45:12.207961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:11.605 [2024-07-25 09:45:12.214627] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 449.826 ms, result 0 00:31:43.813  Copying: 30/1024 [MB] (30 MBps) Copying: 64/1024 [MB] (33 MBps) Copying: 96/1024 [MB] (32 MBps) Copying: 128/1024 [MB] (31 MBps) Copying: 159/1024 [MB] (31 MBps) Copying: 192/1024 [MB] (32 MBps) Copying: 224/1024 [MB] (32 MBps) Copying: 257/1024 [MB] (33 MBps) Copying: 290/1024 [MB] (32 MBps) Copying: 323/1024 [MB] (33 MBps) Copying: 355/1024 [MB] (32 MBps) Copying: 387/1024 [MB] (32 MBps) Copying: 419/1024 [MB] (32 MBps) Copying: 453/1024 [MB] (33 MBps) Copying: 487/1024 [MB] (34 MBps) Copying: 521/1024 [MB] (33 MBps) Copying: 553/1024 [MB] (32 MBps) Copying: 585/1024 [MB] (31 MBps) Copying: 618/1024 [MB] (32 MBps) Copying: 650/1024 [MB] (32 MBps) Copying: 682/1024 [MB] (32 MBps) Copying: 715/1024 [MB] (32 MBps) Copying: 748/1024 [MB] (32 MBps) Copying: 781/1024 [MB] (33 MBps) Copying: 813/1024 [MB] (31 MBps) Copying: 845/1024 [MB] (32 MBps) Copying: 876/1024 [MB] (31 MBps) Copying: 907/1024 [MB] (31 MBps) Copying: 940/1024 [MB] (32 MBps) Copying: 972/1024 [MB] (32 MBps) Copying: 1003/1024 [MB] (31 MBps) Copying: 1024/1024 [MB] (average 32 MBps)[2024-07-25 09:45:44.385321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.813 [2024-07-25 09:45:44.385749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:43.813 [2024-07-25 09:45:44.385802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:43.813 [2024-07-25 09:45:44.385844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.813 [2024-07-25 09:45:44.385930] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:43.813 [2024-07-25 09:45:44.390549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.813 [2024-07-25 09:45:44.390642] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:43.813 [2024-07-25 09:45:44.390681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.886 ms 00:31:43.813 [2024-07-25 09:45:44.390708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.813 [2024-07-25 09:45:44.391032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.813 [2024-07-25 09:45:44.391055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:43.813 [2024-07-25 09:45:44.391067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:31:43.813 [2024-07-25 09:45:44.391078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.813 [2024-07-25 09:45:44.396216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.813 [2024-07-25 09:45:44.396316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:43.813 [2024-07-25 09:45:44.396370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.119 ms 00:31:43.813 [2024-07-25 09:45:44.396409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:43.813 [2024-07-25 09:45:44.402904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:43.813 [2024-07-25 09:45:44.402972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:43.813 [2024-07-25 09:45:44.402985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.435 ms 00:31:43.813 [2024-07-25 09:45:44.402993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.072 [2024-07-25 09:45:44.441178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.072 [2024-07-25 09:45:44.441214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:44.072 [2024-07-25 09:45:44.441225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.199 ms 00:31:44.072 [2024-07-25 09:45:44.441242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.072 [2024-07-25 09:45:44.461505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.072 [2024-07-25 09:45:44.461539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:44.072 [2024-07-25 09:45:44.461556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.265 ms 00:31:44.072 [2024-07-25 09:45:44.461564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.072 [2024-07-25 09:45:44.589105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.072 [2024-07-25 09:45:44.589175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:44.072 [2024-07-25 09:45:44.589191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 127.748 ms 00:31:44.072 [2024-07-25 09:45:44.589201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.072 [2024-07-25 09:45:44.628286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.072 [2024-07-25 09:45:44.628346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:31:44.072 [2024-07-25 09:45:44.628375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.141 ms 00:31:44.072 [2024-07-25 09:45:44.628383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.072 [2024-07-25 09:45:44.665758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:44.072 [2024-07-25 09:45:44.665794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:31:44.072 [2024-07-25 09:45:44.665804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.409 ms 00:31:44.072 [2024-07-25 09:45:44.665810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.333 [2024-07-25 09:45:44.702459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.333 [2024-07-25 09:45:44.702495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:44.333 [2024-07-25 09:45:44.702506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.684 ms 00:31:44.333 [2024-07-25 09:45:44.702527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.333 [2024-07-25 09:45:44.737575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.333 [2024-07-25 09:45:44.737622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:44.333 [2024-07-25 09:45:44.737633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.049 ms 00:31:44.333 [2024-07-25 09:45:44.737639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.333 [2024-07-25 09:45:44.737676] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:44.333 [2024-07-25 09:45:44.737691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:31:44.333 [2024-07-25 09:45:44.737700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 
09:45:44.737799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:31:44.333 [2024-07-25 09:45:44.737972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.737994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.738001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.738008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.738016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.738023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.738031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.738037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:44.333 [2024-07-25 09:45:44.738044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:44.334 [2024-07-25 09:45:44.738452] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:44.334 [2024-07-25 09:45:44.738459] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3be97080-f2b7-443f-968a-fc5f7b226e09 00:31:44.334 [2024-07-25 09:45:44.738466] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:31:44.334 [2024-07-25 09:45:44.738473] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 18112 00:31:44.334 [2024-07-25 09:45:44.738486] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 17152 00:31:44.334 [2024-07-25 09:45:44.738496] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0560 00:31:44.334 [2024-07-25 09:45:44.738503] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:44.334 [2024-07-25 09:45:44.738510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:44.334 [2024-07-25 09:45:44.738528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:44.334 [2024-07-25 09:45:44.738534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:44.334 [2024-07-25 09:45:44.738540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:44.334 [2024-07-25 09:45:44.738547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.334 [2024-07-25 09:45:44.738554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:44.334 [2024-07-25 09:45:44.738562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:31:44.334 [2024-07-25 09:45:44.738568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.334 [2024-07-25 09:45:44.758624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.334 [2024-07-25 09:45:44.758652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:44.334 [2024-07-25 09:45:44.758662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.067 ms 00:31:44.334 
[2024-07-25 09:45:44.758696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.334 [2024-07-25 09:45:44.759188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:44.334 [2024-07-25 09:45:44.759201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:44.334 [2024-07-25 09:45:44.759208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:31:44.334 [2024-07-25 09:45:44.759215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.334 [2024-07-25 09:45:44.801751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.334 [2024-07-25 09:45:44.801784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:44.334 [2024-07-25 09:45:44.801797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.334 [2024-07-25 09:45:44.801804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.334 [2024-07-25 09:45:44.801853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.334 [2024-07-25 09:45:44.801861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:44.334 [2024-07-25 09:45:44.801868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.334 [2024-07-25 09:45:44.801875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.334 [2024-07-25 09:45:44.801950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.334 [2024-07-25 09:45:44.801961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:44.334 [2024-07-25 09:45:44.801969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.334 [2024-07-25 09:45:44.801979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.334 [2024-07-25 09:45:44.801994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.334 [2024-07-25 09:45:44.802001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:44.334 [2024-07-25 09:45:44.802008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.334 [2024-07-25 09:45:44.802014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.334 [2024-07-25 09:45:44.922434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.334 [2024-07-25 09:45:44.922484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:44.334 [2024-07-25 09:45:44.922496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.334 [2024-07-25 09:45:44.922524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.594 [2024-07-25 09:45:45.022539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.594 [2024-07-25 09:45:45.022589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:44.594 [2024-07-25 09:45:45.022602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.594 [2024-07-25 09:45:45.022625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.594 [2024-07-25 09:45:45.022710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.594 [2024-07-25 09:45:45.022720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:44.594 [2024-07-25 09:45:45.022727] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.594 [2024-07-25 09:45:45.022734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.594 [2024-07-25 09:45:45.022771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.594 [2024-07-25 09:45:45.022779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:44.594 [2024-07-25 09:45:45.022786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.594 [2024-07-25 09:45:45.022793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.594 [2024-07-25 09:45:45.022887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.594 [2024-07-25 09:45:45.022898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:44.594 [2024-07-25 09:45:45.022905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.594 [2024-07-25 09:45:45.022912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.594 [2024-07-25 09:45:45.022941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.594 [2024-07-25 09:45:45.022953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:44.594 [2024-07-25 09:45:45.022960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.594 [2024-07-25 09:45:45.022967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.594 [2024-07-25 09:45:45.023000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.594 [2024-07-25 09:45:45.023008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:44.594 [2024-07-25 09:45:45.023014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.594 [2024-07-25 09:45:45.023021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.594 [2024-07-25 09:45:45.023066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:44.594 [2024-07-25 09:45:45.023074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:44.594 [2024-07-25 09:45:45.023082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:44.594 [2024-07-25 09:45:45.023088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:44.594 [2024-07-25 09:45:45.023190] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 639.084 ms, result 0 00:31:45.973 00:31:45.973 00:31:45.973 09:45:46 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:47.391 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:47.391 09:45:47 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:47.391 09:45:47 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:31:47.391 09:45:47 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:47.650 09:45:48 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:47.650 09:45:48 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:47.650 Process with pid 80972 is not found 00:31:47.650 Remove shared memory files 00:31:47.650 09:45:48 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 80972 00:31:47.650 09:45:48 
ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 80972 ']' 00:31:47.650 09:45:48 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 80972 00:31:47.650 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (80972) - No such process 00:31:47.650 09:45:48 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 80972 is not found' 00:31:47.650 09:45:48 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:31:47.650 09:45:48 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:47.650 09:45:48 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:31:47.650 09:45:48 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:31:47.650 09:45:48 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:31:47.650 09:45:48 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:47.650 09:45:48 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:31:47.650 ************************************ 00:31:47.650 END TEST ftl_restore 00:31:47.650 ************************************ 00:31:47.650 00:31:47.650 real 2m54.069s 00:31:47.650 user 2m42.643s 00:31:47.650 sys 0m12.801s 00:31:47.650 09:45:48 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:47.650 09:45:48 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:31:47.650 09:45:48 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:47.650 09:45:48 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:31:47.650 09:45:48 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:47.650 09:45:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:47.650 ************************************ 00:31:47.650 START TEST ftl_dirty_shutdown 00:31:47.650 ************************************ 00:31:47.650 09:45:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:31:47.910 * Looking for test storage... 00:31:47.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:47.910 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:47.910 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:31:47.910 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:47.910 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:47.910 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:31:47.910 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:47.910 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:47.910 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:47.910 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:47.910 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # 
device=0000:00:11.0 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82844 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82844 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 82844 ']' 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:47.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:47.911 09:45:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:47.911 [2024-07-25 09:45:48.476287] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:47.911 [2024-07-25 09:45:48.476871] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82844 ] 00:31:48.170 [2024-07-25 09:45:48.638414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.429 [2024-07-25 09:45:48.877073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.364 09:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:49.364 09:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:31:49.364 09:45:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:49.364 09:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:31:49.364 09:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:49.364 09:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:31:49.364 09:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:31:49.364 09:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:49.624 09:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:49.624 09:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:31:49.624 09:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:49.624 09:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:31:49.624 09:45:49 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1379 -- # local bdev_info 00:31:49.624 09:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:49.624 09:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:31:49.624 09:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:49.624 09:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:49.624 { 00:31:49.624 "name": "nvme0n1", 00:31:49.624 "aliases": [ 00:31:49.624 "3468c9d4-11eb-4d02-b73a-d99dd5cd30b4" 00:31:49.624 ], 00:31:49.624 "product_name": "NVMe disk", 00:31:49.624 "block_size": 4096, 00:31:49.624 "num_blocks": 1310720, 00:31:49.624 "uuid": "3468c9d4-11eb-4d02-b73a-d99dd5cd30b4", 00:31:49.624 "assigned_rate_limits": { 00:31:49.624 "rw_ios_per_sec": 0, 00:31:49.624 "rw_mbytes_per_sec": 0, 00:31:49.624 "r_mbytes_per_sec": 0, 00:31:49.624 "w_mbytes_per_sec": 0 00:31:49.624 }, 00:31:49.624 "claimed": true, 00:31:49.624 "claim_type": "read_many_write_one", 00:31:49.624 "zoned": false, 00:31:49.624 "supported_io_types": { 00:31:49.624 "read": true, 00:31:49.624 "write": true, 00:31:49.624 "unmap": true, 00:31:49.624 "flush": true, 00:31:49.624 "reset": true, 00:31:49.624 "nvme_admin": true, 00:31:49.624 "nvme_io": true, 00:31:49.624 "nvme_io_md": false, 00:31:49.624 "write_zeroes": true, 00:31:49.624 "zcopy": false, 00:31:49.624 "get_zone_info": false, 00:31:49.624 "zone_management": false, 00:31:49.624 "zone_append": false, 00:31:49.624 "compare": true, 00:31:49.624 "compare_and_write": false, 00:31:49.624 "abort": true, 00:31:49.624 "seek_hole": false, 00:31:49.624 "seek_data": false, 00:31:49.624 "copy": true, 00:31:49.624 "nvme_iov_md": false 00:31:49.624 }, 00:31:49.624 "driver_specific": { 00:31:49.624 "nvme": [ 00:31:49.624 { 00:31:49.624 "pci_address": "0000:00:11.0", 00:31:49.624 "trid": { 00:31:49.624 "trtype": "PCIe", 00:31:49.624 "traddr": "0000:00:11.0" 00:31:49.624 }, 00:31:49.624 "ctrlr_data": { 00:31:49.624 "cntlid": 0, 00:31:49.624 "vendor_id": "0x1b36", 00:31:49.624 "model_number": "QEMU NVMe Ctrl", 00:31:49.624 "serial_number": "12341", 00:31:49.624 "firmware_revision": "8.0.0", 00:31:49.624 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:49.624 "oacs": { 00:31:49.624 "security": 0, 00:31:49.624 "format": 1, 00:31:49.624 "firmware": 0, 00:31:49.624 "ns_manage": 1 00:31:49.624 }, 00:31:49.624 "multi_ctrlr": false, 00:31:49.624 "ana_reporting": false 00:31:49.624 }, 00:31:49.624 "vs": { 00:31:49.624 "nvme_version": "1.4" 00:31:49.624 }, 00:31:49.624 "ns_data": { 00:31:49.624 "id": 1, 00:31:49.624 "can_share": false 00:31:49.624 } 00:31:49.624 } 00:31:49.624 ], 00:31:49.624 "mp_policy": "active_passive" 00:31:49.624 } 00:31:49.624 } 00:31:49.624 ]' 00:31:49.624 09:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:49.624 09:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:49.624 09:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=05c0b8bb-ae31-44a5-af6e-8f42c3f044a6 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:31:49.883 09:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05c0b8bb-ae31-44a5-af6e-8f42c3f044a6 00:31:50.141 09:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:50.398 09:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=711496aa-7c08-45f5-9e0b-f64ab7c683db 00:31:50.398 09:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 711496aa-7c08-45f5-9e0b-f64ab7c683db 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:50.655 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:50.655 { 00:31:50.655 "name": "1b50536c-5a2c-4fec-a035-442a3d28f04c", 00:31:50.655 "aliases": [ 00:31:50.655 "lvs/nvme0n1p0" 00:31:50.655 ], 00:31:50.655 "product_name": "Logical Volume", 00:31:50.655 "block_size": 4096, 00:31:50.655 "num_blocks": 26476544, 00:31:50.655 "uuid": "1b50536c-5a2c-4fec-a035-442a3d28f04c", 00:31:50.655 "assigned_rate_limits": { 00:31:50.655 "rw_ios_per_sec": 0, 00:31:50.655 "rw_mbytes_per_sec": 0, 00:31:50.655 "r_mbytes_per_sec": 0, 00:31:50.655 "w_mbytes_per_sec": 0 00:31:50.655 }, 00:31:50.655 "claimed": false, 00:31:50.655 "zoned": false, 00:31:50.655 "supported_io_types": { 00:31:50.655 "read": true, 00:31:50.655 "write": true, 00:31:50.655 "unmap": true, 00:31:50.655 "flush": false, 00:31:50.655 "reset": true, 
00:31:50.655 "nvme_admin": false, 00:31:50.655 "nvme_io": false, 00:31:50.655 "nvme_io_md": false, 00:31:50.655 "write_zeroes": true, 00:31:50.655 "zcopy": false, 00:31:50.655 "get_zone_info": false, 00:31:50.655 "zone_management": false, 00:31:50.655 "zone_append": false, 00:31:50.655 "compare": false, 00:31:50.655 "compare_and_write": false, 00:31:50.655 "abort": false, 00:31:50.655 "seek_hole": true, 00:31:50.655 "seek_data": true, 00:31:50.655 "copy": false, 00:31:50.655 "nvme_iov_md": false 00:31:50.655 }, 00:31:50.655 "driver_specific": { 00:31:50.655 "lvol": { 00:31:50.655 "lvol_store_uuid": "711496aa-7c08-45f5-9e0b-f64ab7c683db", 00:31:50.655 "base_bdev": "nvme0n1", 00:31:50.655 "thin_provision": true, 00:31:50.655 "num_allocated_clusters": 0, 00:31:50.655 "snapshot": false, 00:31:50.655 "clone": false, 00:31:50.655 "esnap_clone": false 00:31:50.655 } 00:31:50.655 } 00:31:50.655 } 00:31:50.655 ]' 00:31:50.916 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:50.916 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:50.916 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:50.916 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:50.916 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:50.916 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:31:50.916 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:31:50.916 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:31:50.916 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:51.181 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:51.181 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:51.181 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:51.182 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:51.182 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:51.182 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:51.182 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:31:51.182 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:51.440 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:51.440 { 00:31:51.440 "name": "1b50536c-5a2c-4fec-a035-442a3d28f04c", 00:31:51.440 "aliases": [ 00:31:51.440 "lvs/nvme0n1p0" 00:31:51.440 ], 00:31:51.440 "product_name": "Logical Volume", 00:31:51.440 "block_size": 4096, 00:31:51.440 "num_blocks": 26476544, 00:31:51.440 "uuid": "1b50536c-5a2c-4fec-a035-442a3d28f04c", 00:31:51.440 "assigned_rate_limits": { 00:31:51.440 "rw_ios_per_sec": 0, 00:31:51.440 "rw_mbytes_per_sec": 0, 00:31:51.440 "r_mbytes_per_sec": 0, 00:31:51.440 "w_mbytes_per_sec": 0 00:31:51.440 }, 00:31:51.440 "claimed": false, 00:31:51.440 "zoned": false, 00:31:51.440 "supported_io_types": { 00:31:51.440 "read": true, 00:31:51.440 "write": true, 00:31:51.440 "unmap": 
true, 00:31:51.440 "flush": false, 00:31:51.440 "reset": true, 00:31:51.440 "nvme_admin": false, 00:31:51.440 "nvme_io": false, 00:31:51.440 "nvme_io_md": false, 00:31:51.440 "write_zeroes": true, 00:31:51.440 "zcopy": false, 00:31:51.440 "get_zone_info": false, 00:31:51.440 "zone_management": false, 00:31:51.440 "zone_append": false, 00:31:51.440 "compare": false, 00:31:51.440 "compare_and_write": false, 00:31:51.440 "abort": false, 00:31:51.440 "seek_hole": true, 00:31:51.440 "seek_data": true, 00:31:51.440 "copy": false, 00:31:51.440 "nvme_iov_md": false 00:31:51.440 }, 00:31:51.440 "driver_specific": { 00:31:51.440 "lvol": { 00:31:51.440 "lvol_store_uuid": "711496aa-7c08-45f5-9e0b-f64ab7c683db", 00:31:51.440 "base_bdev": "nvme0n1", 00:31:51.440 "thin_provision": true, 00:31:51.440 "num_allocated_clusters": 0, 00:31:51.440 "snapshot": false, 00:31:51.440 "clone": false, 00:31:51.440 "esnap_clone": false 00:31:51.440 } 00:31:51.440 } 00:31:51.440 } 00:31:51.440 ]' 00:31:51.440 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:51.440 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:51.440 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:51.440 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:51.440 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:51.440 09:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:31:51.440 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:31:51.440 09:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:51.698 09:45:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:31:51.698 09:45:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:51.698 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:51.698 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:31:51.698 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:31:51.699 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:31:51.699 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1b50536c-5a2c-4fec-a035-442a3d28f04c 00:31:51.957 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:31:51.957 { 00:31:51.957 "name": "1b50536c-5a2c-4fec-a035-442a3d28f04c", 00:31:51.957 "aliases": [ 00:31:51.957 "lvs/nvme0n1p0" 00:31:51.957 ], 00:31:51.958 "product_name": "Logical Volume", 00:31:51.958 "block_size": 4096, 00:31:51.958 "num_blocks": 26476544, 00:31:51.958 "uuid": "1b50536c-5a2c-4fec-a035-442a3d28f04c", 00:31:51.958 "assigned_rate_limits": { 00:31:51.958 "rw_ios_per_sec": 0, 00:31:51.958 "rw_mbytes_per_sec": 0, 00:31:51.958 "r_mbytes_per_sec": 0, 00:31:51.958 "w_mbytes_per_sec": 0 00:31:51.958 }, 00:31:51.958 "claimed": false, 00:31:51.958 "zoned": false, 00:31:51.958 "supported_io_types": { 00:31:51.958 "read": true, 00:31:51.958 "write": true, 00:31:51.958 "unmap": true, 00:31:51.958 "flush": false, 00:31:51.958 "reset": true, 00:31:51.958 "nvme_admin": false, 00:31:51.958 
"nvme_io": false, 00:31:51.958 "nvme_io_md": false, 00:31:51.958 "write_zeroes": true, 00:31:51.958 "zcopy": false, 00:31:51.958 "get_zone_info": false, 00:31:51.958 "zone_management": false, 00:31:51.958 "zone_append": false, 00:31:51.958 "compare": false, 00:31:51.958 "compare_and_write": false, 00:31:51.958 "abort": false, 00:31:51.958 "seek_hole": true, 00:31:51.958 "seek_data": true, 00:31:51.958 "copy": false, 00:31:51.958 "nvme_iov_md": false 00:31:51.958 }, 00:31:51.958 "driver_specific": { 00:31:51.958 "lvol": { 00:31:51.958 "lvol_store_uuid": "711496aa-7c08-45f5-9e0b-f64ab7c683db", 00:31:51.958 "base_bdev": "nvme0n1", 00:31:51.958 "thin_provision": true, 00:31:51.958 "num_allocated_clusters": 0, 00:31:51.958 "snapshot": false, 00:31:51.958 "clone": false, 00:31:51.958 "esnap_clone": false 00:31:51.958 } 00:31:51.958 } 00:31:51.958 } 00:31:51.958 ]' 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 1b50536c-5a2c-4fec-a035-442a3d28f04c --l2p_dram_limit 10' 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:51.958 09:45:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1b50536c-5a2c-4fec-a035-442a3d28f04c --l2p_dram_limit 10 -c nvc0n1p0 00:31:52.282 [2024-07-25 09:45:52.613080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.282 [2024-07-25 09:45:52.613145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:52.282 [2024-07-25 09:45:52.613161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:52.282 [2024-07-25 09:45:52.613172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.282 [2024-07-25 09:45:52.613254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.282 [2024-07-25 09:45:52.613268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:52.282 [2024-07-25 09:45:52.613277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:31:52.282 [2024-07-25 09:45:52.613287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.282 [2024-07-25 09:45:52.613309] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:52.282 [2024-07-25 09:45:52.614641] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:52.282 [2024-07-25 09:45:52.614669] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:31:52.282 [2024-07-25 09:45:52.614683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:52.282 [2024-07-25 09:45:52.614691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:31:52.282 [2024-07-25 09:45:52.614710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.282 [2024-07-25 09:45:52.614779] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID dc496ea9-22e3-4957-af43-c6124a444212 00:31:52.282 [2024-07-25 09:45:52.616225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.282 [2024-07-25 09:45:52.616265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:52.282 [2024-07-25 09:45:52.616278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:31:52.282 [2024-07-25 09:45:52.616286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.282 [2024-07-25 09:45:52.624161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.282 [2024-07-25 09:45:52.624197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:52.282 [2024-07-25 09:45:52.624209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.841 ms 00:31:52.282 [2024-07-25 09:45:52.624232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.282 [2024-07-25 09:45:52.624358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.283 [2024-07-25 09:45:52.624371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:52.283 [2024-07-25 09:45:52.624399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:31:52.283 [2024-07-25 09:45:52.624408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.283 [2024-07-25 09:45:52.624487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.283 [2024-07-25 09:45:52.624498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:52.283 [2024-07-25 09:45:52.624511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:52.283 [2024-07-25 09:45:52.624519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.283 [2024-07-25 09:45:52.624548] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:52.283 [2024-07-25 09:45:52.630456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.283 [2024-07-25 09:45:52.630490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:52.283 [2024-07-25 09:45:52.630500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.930 ms 00:31:52.283 [2024-07-25 09:45:52.630509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.283 [2024-07-25 09:45:52.630543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.283 [2024-07-25 09:45:52.630685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:52.283 [2024-07-25 09:45:52.630694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:52.283 [2024-07-25 09:45:52.630712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.283 [2024-07-25 09:45:52.630755] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:52.283 [2024-07-25 
09:45:52.630888] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:52.283 [2024-07-25 09:45:52.630900] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:52.283 [2024-07-25 09:45:52.630914] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:31:52.283 [2024-07-25 09:45:52.630924] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:52.283 [2024-07-25 09:45:52.630934] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:52.283 [2024-07-25 09:45:52.630942] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:52.283 [2024-07-25 09:45:52.630958] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:52.283 [2024-07-25 09:45:52.630965] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:52.283 [2024-07-25 09:45:52.630973] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:52.283 [2024-07-25 09:45:52.630980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.283 [2024-07-25 09:45:52.630989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:52.283 [2024-07-25 09:45:52.630997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:31:52.283 [2024-07-25 09:45:52.631005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.283 [2024-07-25 09:45:52.631073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.283 [2024-07-25 09:45:52.631083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:52.283 [2024-07-25 09:45:52.631090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:52.283 [2024-07-25 09:45:52.631101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.283 [2024-07-25 09:45:52.631180] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:52.283 [2024-07-25 09:45:52.631193] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:52.283 [2024-07-25 09:45:52.631213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:52.283 [2024-07-25 09:45:52.631223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:52.283 [2024-07-25 09:45:52.631255] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631261] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:52.283 [2024-07-25 09:45:52.631269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:52.283 [2024-07-25 09:45:52.631276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631285] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:52.283 [2024-07-25 09:45:52.631292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:52.283 [2024-07-25 09:45:52.631301] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:52.283 [2024-07-25 09:45:52.631307] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:31:52.283 [2024-07-25 09:45:52.631317] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:52.283 [2024-07-25 09:45:52.631323] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:52.283 [2024-07-25 09:45:52.631331] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:52.283 [2024-07-25 09:45:52.631348] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:52.283 [2024-07-25 09:45:52.631354] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:52.283 [2024-07-25 09:45:52.631369] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631376] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.283 [2024-07-25 09:45:52.631383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:52.283 [2024-07-25 09:45:52.631391] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631397] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.283 [2024-07-25 09:45:52.631405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:52.283 [2024-07-25 09:45:52.631411] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.283 [2024-07-25 09:45:52.631425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:52.283 [2024-07-25 09:45:52.631433] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.283 [2024-07-25 09:45:52.631446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:52.283 [2024-07-25 09:45:52.631452] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631462] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:52.283 [2024-07-25 09:45:52.631468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:52.283 [2024-07-25 09:45:52.631477] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:52.283 [2024-07-25 09:45:52.631483] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:52.283 [2024-07-25 09:45:52.631491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:52.283 [2024-07-25 09:45:52.631497] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:52.283 [2024-07-25 09:45:52.631504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631510] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:52.283 [2024-07-25 09:45:52.631518] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:52.283 [2024-07-25 09:45:52.631524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.283 [2024-07-25 09:45:52.631531] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:52.283 [2024-07-25 09:45:52.631538] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:52.284 [2024-07-25 09:45:52.631547] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:52.284 [2024-07-25 09:45:52.631554] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.284 [2024-07-25 09:45:52.631563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:52.284 [2024-07-25 09:45:52.631570] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:52.284 [2024-07-25 09:45:52.631579] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:52.284 [2024-07-25 09:45:52.631586] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:52.284 [2024-07-25 09:45:52.631593] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:52.284 [2024-07-25 09:45:52.631600] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:52.284 [2024-07-25 09:45:52.631612] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:52.284 [2024-07-25 09:45:52.631622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:52.284 [2024-07-25 09:45:52.631632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:52.284 [2024-07-25 09:45:52.631639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:52.284 [2024-07-25 09:45:52.631647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:52.284 [2024-07-25 09:45:52.631654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:52.284 [2024-07-25 09:45:52.631664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:52.284 [2024-07-25 09:45:52.631670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:52.284 [2024-07-25 09:45:52.631679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:52.284 [2024-07-25 09:45:52.631685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:52.284 [2024-07-25 09:45:52.631693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:52.284 [2024-07-25 09:45:52.631700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:52.284 [2024-07-25 09:45:52.631710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:52.284 [2024-07-25 09:45:52.631717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:52.284 [2024-07-25 09:45:52.631725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:52.284 [2024-07-25 
09:45:52.631732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:52.284 [2024-07-25 09:45:52.631739] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:52.284 [2024-07-25 09:45:52.631746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:52.284 [2024-07-25 09:45:52.631756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:52.284 [2024-07-25 09:45:52.631763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:52.284 [2024-07-25 09:45:52.631771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:52.284 [2024-07-25 09:45:52.631779] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:52.284 [2024-07-25 09:45:52.631788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.284 [2024-07-25 09:45:52.631796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:52.284 [2024-07-25 09:45:52.631805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.657 ms 00:31:52.284 [2024-07-25 09:45:52.631812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.284 [2024-07-25 09:45:52.631866] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
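For reference, the bdev_ftl_create call traced above is the last step of building the FTL stack by hand in this test. A minimal sketch of the same RPC sequence, assuming the thin-provisioned lvol 1b50536c-5a2c-4fec-a035-442a3d28f04c already exists on the "lvs" store created earlier in this run (not shown in this excerpt), and reusing the device address and sizes from this log:

    # attach the second NVMe controller and split off a 5171 MiB write-buffer cache
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1   # yields nvc0n1p0

    # create the FTL bdev on top of the lvol, capping the L2P table at 10 MiB of DRAM
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 1b50536c-5a2c-4fec-a035-442a3d28f04c --l2p_dram_limit 10 -c nvc0n1p0

The sizes in the layout dump are consistent with the bdev JSON above: 26476544 blocks * 4096 B = 103424 MiB of base capacity, and 20971520 L2P entries * 4 B per address account for the 80.00 MiB l2p region.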
00:31:52.284 [2024-07-25 09:45:52.631875] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:55.572 [2024-07-25 09:45:55.801580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.572 [2024-07-25 09:45:55.801650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:55.572 [2024-07-25 09:45:55.801666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3175.814 ms 00:31:55.572 [2024-07-25 09:45:55.801675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.572 [2024-07-25 09:45:55.845512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.572 [2024-07-25 09:45:55.845641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:55.572 [2024-07-25 09:45:55.845679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.602 ms 00:31:55.572 [2024-07-25 09:45:55.845701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.572 [2024-07-25 09:45:55.845878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:55.845916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:55.573 [2024-07-25 09:45:55.845960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:55.573 [2024-07-25 09:45:55.845981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:55.899306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:55.899435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:55.573 [2024-07-25 09:45:55.899470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.363 ms 00:31:55.573 [2024-07-25 09:45:55.899491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:55.899580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:55.899655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:55.573 [2024-07-25 09:45:55.899700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:55.573 [2024-07-25 09:45:55.899734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:55.900304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:55.900372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:55.573 [2024-07-25 09:45:55.900411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:31:55.573 [2024-07-25 09:45:55.900442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:55.900580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:55.900622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:55.573 [2024-07-25 09:45:55.900657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:31:55.573 [2024-07-25 09:45:55.900668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:55.923164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:55.923211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:55.573 [2024-07-25 
09:45:55.923226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.508 ms 00:31:55.573 [2024-07-25 09:45:55.923247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:55.936918] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:55.573 [2024-07-25 09:45:55.940227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:55.940267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:55.573 [2024-07-25 09:45:55.940279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.876 ms 00:31:55.573 [2024-07-25 09:45:55.940304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:56.046955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:56.047026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:55.573 [2024-07-25 09:45:56.047041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 106.813 ms 00:31:55.573 [2024-07-25 09:45:56.047051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:56.047254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:56.047269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:55.573 [2024-07-25 09:45:56.047278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:31:55.573 [2024-07-25 09:45:56.047290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:56.088202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:56.088263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:31:55.573 [2024-07-25 09:45:56.088277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.937 ms 00:31:55.573 [2024-07-25 09:45:56.088291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:56.127018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:56.127066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:55.573 [2024-07-25 09:45:56.127080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.759 ms 00:31:55.573 [2024-07-25 09:45:56.127089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.573 [2024-07-25 09:45:56.127977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.573 [2024-07-25 09:45:56.128007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:55.573 [2024-07-25 09:45:56.128021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:31:55.573 [2024-07-25 09:45:56.128030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.832 [2024-07-25 09:45:56.241639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.832 [2024-07-25 09:45:56.241705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:55.832 [2024-07-25 09:45:56.241721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 113.770 ms 00:31:55.832 [2024-07-25 09:45:56.241734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.832 [2024-07-25 
09:45:56.282128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.832 [2024-07-25 09:45:56.282181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:55.832 [2024-07-25 09:45:56.282195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.428 ms 00:31:55.832 [2024-07-25 09:45:56.282204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.832 [2024-07-25 09:45:56.320814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.832 [2024-07-25 09:45:56.320861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:55.832 [2024-07-25 09:45:56.320872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.629 ms 00:31:55.832 [2024-07-25 09:45:56.320897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.832 [2024-07-25 09:45:56.359583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.832 [2024-07-25 09:45:56.359626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:55.832 [2024-07-25 09:45:56.359638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.721 ms 00:31:55.832 [2024-07-25 09:45:56.359663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.832 [2024-07-25 09:45:56.359707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.832 [2024-07-25 09:45:56.359718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:55.832 [2024-07-25 09:45:56.359727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:55.832 [2024-07-25 09:45:56.359739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.832 [2024-07-25 09:45:56.359827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.832 [2024-07-25 09:45:56.359842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:55.832 [2024-07-25 09:45:56.359851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:55.832 [2024-07-25 09:45:56.359861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.832 [2024-07-25 09:45:56.360993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3754.639 ms, result 0 00:31:55.832 { 00:31:55.832 "name": "ftl0", 00:31:55.832 "uuid": "dc496ea9-22e3-4957-af43-c6124a444212" 00:31:55.832 } 00:31:55.832 09:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:31:55.832 09:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:56.092 09:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:31:56.092 09:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:31:56.092 09:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:31:56.351 /dev/nbd0 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown 
-- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:31:56.351 1+0 records in 00:31:56.351 1+0 records out 00:31:56.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372804 s, 11.0 MB/s 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:31:56.351 09:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:31:56.351 [2024-07-25 09:45:56.857666] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:56.351 [2024-07-25 09:45:56.857766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82988 ] 00:31:56.611 [2024-07-25 09:45:57.019439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.870 [2024-07-25 09:45:57.249711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:03.058  Copying: 231/1024 [MB] (231 MBps) Copying: 463/1024 [MB] (232 MBps) Copying: 694/1024 [MB] (230 MBps) Copying: 922/1024 [MB] (228 MBps) Copying: 1024/1024 [MB] (average 230 MBps) 00:32:03.058 00:32:03.058 09:46:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:04.974 09:46:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:32:04.974 [2024-07-25 09:46:05.239548] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:32:04.974 [2024-07-25 09:46:05.239663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83075 ] 00:32:04.974 [2024-07-25 09:46:05.405185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.232 [2024-07-25 09:46:05.637553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.017  Copying: 21/1024 [MB] (21 MBps) Copying: 43/1024 [MB] (22 MBps) Copying: 64/1024 [MB] (21 MBps) Copying: 83/1024 [MB] (19 MBps) Copying: 103/1024 [MB] (20 MBps) Copying: 124/1024 [MB] (20 MBps) Copying: 145/1024 [MB] (21 MBps) Copying: 166/1024 [MB] (21 MBps) Copying: 188/1024 [MB] (21 MBps) Copying: 207/1024 [MB] (19 MBps) Copying: 228/1024 [MB] (21 MBps) Copying: 249/1024 [MB] (21 MBps) Copying: 270/1024 [MB] (20 MBps) Copying: 291/1024 [MB] (20 MBps) Copying: 313/1024 [MB] (21 MBps) Copying: 334/1024 [MB] (21 MBps) Copying: 353/1024 [MB] (18 MBps) Copying: 374/1024 [MB] (21 MBps) Copying: 394/1024 [MB] (20 MBps) Copying: 414/1024 [MB] (19 MBps) Copying: 435/1024 [MB] (20 MBps) Copying: 456/1024 [MB] (21 MBps) Copying: 476/1024 [MB] (20 MBps) Copying: 498/1024 [MB] (21 MBps) Copying: 519/1024 [MB] (21 MBps) Copying: 540/1024 [MB] (20 MBps) Copying: 560/1024 [MB] (20 MBps) Copying: 581/1024 [MB] (20 MBps) Copying: 601/1024 [MB] (20 MBps) Copying: 622/1024 [MB] (20 MBps) Copying: 643/1024 [MB] (20 MBps) Copying: 664/1024 [MB] (21 MBps) Copying: 685/1024 [MB] (21 MBps) Copying: 706/1024 [MB] (21 MBps) Copying: 728/1024 [MB] (21 MBps) Copying: 749/1024 [MB] (21 MBps) Copying: 769/1024 [MB] (20 MBps) Copying: 791/1024 [MB] (21 MBps) Copying: 812/1024 [MB] (21 MBps) Copying: 833/1024 [MB] (21 MBps) Copying: 853/1024 [MB] (20 MBps) Copying: 875/1024 [MB] (21 MBps) Copying: 896/1024 [MB] (21 MBps) Copying: 917/1024 [MB] (21 MBps) Copying: 938/1024 [MB] (21 MBps) Copying: 959/1024 [MB] (20 MBps) Copying: 980/1024 [MB] (21 MBps) Copying: 1001/1024 [MB] (20 MBps) Copying: 1022/1024 [MB] (21 MBps) Copying: 1024/1024 [MB] (average 20 MBps) 00:32:56.017 00:32:56.017 09:46:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:32:56.017 09:46:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:32:56.017 09:46:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:32:56.275 [2024-07-25 09:46:56.763070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.275 [2024-07-25 09:46:56.763128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:56.275 [2024-07-25 09:46:56.763157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:56.275 [2024-07-25 09:46:56.763165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.275 [2024-07-25 09:46:56.763195] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:56.275 [2024-07-25 09:46:56.767316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.275 [2024-07-25 09:46:56.767355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:56.275 [2024-07-25 09:46:56.767367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.114 ms 00:32:56.275 
[2024-07-25 09:46:56.767379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.275 [2024-07-25 09:46:56.769518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.275 [2024-07-25 09:46:56.769568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:56.275 [2024-07-25 09:46:56.769580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:32:56.275 [2024-07-25 09:46:56.769596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.275 [2024-07-25 09:46:56.786706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.275 [2024-07-25 09:46:56.786753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:32:56.275 [2024-07-25 09:46:56.786765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.121 ms 00:32:56.275 [2024-07-25 09:46:56.786775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.275 [2024-07-25 09:46:56.792146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.275 [2024-07-25 09:46:56.792187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:32:56.275 [2024-07-25 09:46:56.792198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.345 ms 00:32:56.275 [2024-07-25 09:46:56.792207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.275 [2024-07-25 09:46:56.830579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.275 [2024-07-25 09:46:56.830624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:32:56.275 [2024-07-25 09:46:56.830635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.363 ms 00:32:56.275 [2024-07-25 09:46:56.830645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.275 [2024-07-25 09:46:56.855493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.275 [2024-07-25 09:46:56.855546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:32:56.275 [2024-07-25 09:46:56.855559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.855 ms 00:32:56.275 [2024-07-25 09:46:56.855569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.275 [2024-07-25 09:46:56.855733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.275 [2024-07-25 09:46:56.855747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:32:56.275 [2024-07-25 09:46:56.855755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:32:56.275 [2024-07-25 09:46:56.855764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.535 [2024-07-25 09:46:56.898653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.535 [2024-07-25 09:46:56.898700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:32:56.535 [2024-07-25 09:46:56.898713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.953 ms 00:32:56.535 [2024-07-25 09:46:56.898722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.535 [2024-07-25 09:46:56.939838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.535 [2024-07-25 09:46:56.939891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:32:56.535 [2024-07-25 09:46:56.939905] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.149 ms 00:32:56.535 [2024-07-25 09:46:56.939915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.535 [2024-07-25 09:46:56.979993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.535 [2024-07-25 09:46:56.980043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:32:56.535 [2024-07-25 09:46:56.980055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.104 ms 00:32:56.535 [2024-07-25 09:46:56.980064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.535 [2024-07-25 09:46:57.021223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.535 [2024-07-25 09:46:57.021279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:32:56.535 [2024-07-25 09:46:57.021292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.136 ms 00:32:56.535 [2024-07-25 09:46:57.021303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.535 [2024-07-25 09:46:57.021352] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:56.535 [2024-07-25 09:46:57.021371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:56.535 [2024-07-25 09:46:57.021710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021749] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 
09:46:57.021987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.021995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:32:56.536 [2024-07-25 09:46:57.022206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:56.536 [2024-07-25 09:46:57.022304] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:56.536 [2024-07-25 09:46:57.022312] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dc496ea9-22e3-4957-af43-c6124a444212 00:32:56.536 [2024-07-25 09:46:57.022333] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:56.536 [2024-07-25 09:46:57.022366] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:32:56.536 [2024-07-25 09:46:57.022378] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:56.536 [2024-07-25 09:46:57.022386] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:56.536 [2024-07-25 09:46:57.022395] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:56.536 [2024-07-25 09:46:57.022419] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:56.536 [2024-07-25 09:46:57.022429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:56.536 [2024-07-25 09:46:57.022437] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:56.536 [2024-07-25 09:46:57.022459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:56.536 [2024-07-25 09:46:57.022469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.536 [2024-07-25 09:46:57.022479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:56.536 [2024-07-25 09:46:57.022488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:32:56.536 [2024-07-25 09:46:57.022498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.536 [2024-07-25 09:46:57.045435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:56.536 [2024-07-25 09:46:57.045493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:56.536 [2024-07-25 09:46:57.045506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.916 ms 00:32:56.536 [2024-07-25 09:46:57.045518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.536 [2024-07-25 09:46:57.046125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:56.536 [2024-07-25 09:46:57.046148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:56.536 [2024-07-25 09:46:57.046159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:32:56.536 [2024-07-25 09:46:57.046169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.536 [2024-07-25 09:46:57.120556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.536 [2024-07-25 09:46:57.120621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:56.536 [2024-07-25 09:46:57.120634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.536 [2024-07-25 09:46:57.120645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.536 [2024-07-25 09:46:57.120729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.536 [2024-07-25 09:46:57.120741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:56.536 [2024-07-25 09:46:57.120750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.536 [2024-07-25 09:46:57.120759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.537 [2024-07-25 09:46:57.120867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.537 [2024-07-25 09:46:57.120884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:56.537 [2024-07-25 09:46:57.120893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.537 [2024-07-25 09:46:57.120903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.537 [2024-07-25 09:46:57.120924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.537 [2024-07-25 09:46:57.120938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:56.537 [2024-07-25 09:46:57.120946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.537 [2024-07-25 09:46:57.120956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.795 [2024-07-25 09:46:57.252809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.795 [2024-07-25 09:46:57.252878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:56.795 [2024-07-25 09:46:57.252892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.795 [2024-07-25 09:46:57.252903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.795 [2024-07-25 09:46:57.370089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.795 [2024-07-25 09:46:57.370163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:56.795 [2024-07-25 09:46:57.370177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.795 [2024-07-25 09:46:57.370188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.795 [2024-07-25 09:46:57.370327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.795 [2024-07-25 09:46:57.370345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:56.795 [2024-07-25 09:46:57.370355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.795 [2024-07-25 09:46:57.370365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.795 
[2024-07-25 09:46:57.370424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.795 [2024-07-25 09:46:57.370439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:56.795 [2024-07-25 09:46:57.370448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.795 [2024-07-25 09:46:57.370459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.795 [2024-07-25 09:46:57.370569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.795 [2024-07-25 09:46:57.370590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:56.795 [2024-07-25 09:46:57.370602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.795 [2024-07-25 09:46:57.370612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.795 [2024-07-25 09:46:57.370660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.795 [2024-07-25 09:46:57.370674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:56.795 [2024-07-25 09:46:57.370683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.795 [2024-07-25 09:46:57.370710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.795 [2024-07-25 09:46:57.370753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.795 [2024-07-25 09:46:57.370771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:56.795 [2024-07-25 09:46:57.370783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.795 [2024-07-25 09:46:57.370793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.795 [2024-07-25 09:46:57.370842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:56.795 [2024-07-25 09:46:57.370858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:56.795 [2024-07-25 09:46:57.370867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:56.795 [2024-07-25 09:46:57.370878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:56.795 [2024-07-25 09:46:57.371026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 609.095 ms, result 0 00:32:56.795 true 00:32:56.795 09:46:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82844 00:32:56.795 09:46:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82844 00:32:57.054 09:46:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:32:57.054 [2024-07-25 09:46:57.496226] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:32:57.054 [2024-07-25 09:46:57.496355] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83605 ] 00:32:57.054 [2024-07-25 09:46:57.662565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:57.314 [2024-07-25 09:46:57.903779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.583  Copying: 214/1024 [MB] (214 MBps) Copying: 454/1024 [MB] (239 MBps) Copying: 699/1024 [MB] (245 MBps) Copying: 925/1024 [MB] (226 MBps) Copying: 1024/1024 [MB] (average 231 MBps) 00:33:03.583 00:33:03.583 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82844 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:33:03.583 09:47:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:03.583 [2024-07-25 09:47:04.188433] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:33:03.583 [2024-07-25 09:47:04.188566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83670 ] 00:33:03.843 [2024-07-25 09:47:04.353478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.103 [2024-07-25 09:47:04.595025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:04.670 [2024-07-25 09:47:05.006367] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:04.670 [2024-07-25 09:47:05.006435] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:04.670 [2024-07-25 09:47:05.072198] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:04.670 [2024-07-25 09:47:05.072503] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:04.670 [2024-07-25 09:47:05.072684] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:04.931 [2024-07-25 09:47:05.303882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.931 [2024-07-25 09:47:05.303943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:04.931 [2024-07-25 09:47:05.303957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:04.932 [2024-07-25 09:47:05.303965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.304018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 [2024-07-25 09:47:05.304030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:04.932 [2024-07-25 09:47:05.304038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:04.932 [2024-07-25 09:47:05.304045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.304063] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:04.932 [2024-07-25 09:47:05.305370] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
00:33:04.932 [2024-07-25 09:47:05.305397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 [2024-07-25 09:47:05.305406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:04.932 [2024-07-25 09:47:05.305415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.340 ms 00:33:04.932 [2024-07-25 09:47:05.305423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.306959] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:04.932 [2024-07-25 09:47:05.331018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 [2024-07-25 09:47:05.331156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:04.932 [2024-07-25 09:47:05.331204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.104 ms 00:33:04.932 [2024-07-25 09:47:05.331235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.331374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 [2024-07-25 09:47:05.331420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:04.932 [2024-07-25 09:47:05.331445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:33:04.932 [2024-07-25 09:47:05.331509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.339287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 [2024-07-25 09:47:05.339398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:04.932 [2024-07-25 09:47:05.339430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.659 ms 00:33:04.932 [2024-07-25 09:47:05.339455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.339562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 [2024-07-25 09:47:05.339620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:04.932 [2024-07-25 09:47:05.339646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:04.932 [2024-07-25 09:47:05.339669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.339751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 [2024-07-25 09:47:05.339794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:04.932 [2024-07-25 09:47:05.339831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:04.932 [2024-07-25 09:47:05.339854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.339902] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:04.932 [2024-07-25 09:47:05.346731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 [2024-07-25 09:47:05.346810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:04.932 [2024-07-25 09:47:05.346863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:33:04.932 [2024-07-25 09:47:05.346888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.346969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 
[2024-07-25 09:47:05.347005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:04.932 [2024-07-25 09:47:05.347045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:04.932 [2024-07-25 09:47:05.347079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.347152] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:04.932 [2024-07-25 09:47:05.347207] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:04.932 [2024-07-25 09:47:05.347310] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:04.932 [2024-07-25 09:47:05.347358] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:33:04.932 [2024-07-25 09:47:05.347468] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:04.932 [2024-07-25 09:47:05.347480] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:04.932 [2024-07-25 09:47:05.347492] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:33:04.932 [2024-07-25 09:47:05.347505] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:04.932 [2024-07-25 09:47:05.347516] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:04.932 [2024-07-25 09:47:05.347529] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:04.932 [2024-07-25 09:47:05.347538] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:04.932 [2024-07-25 09:47:05.347547] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:04.932 [2024-07-25 09:47:05.347555] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:04.932 [2024-07-25 09:47:05.347565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 [2024-07-25 09:47:05.347574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:04.932 [2024-07-25 09:47:05.347583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:33:04.932 [2024-07-25 09:47:05.347591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.347682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.932 [2024-07-25 09:47:05.347693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:04.932 [2024-07-25 09:47:05.347705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:33:04.932 [2024-07-25 09:47:05.347713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.932 [2024-07-25 09:47:05.347812] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:04.932 [2024-07-25 09:47:05.347824] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:04.932 [2024-07-25 09:47:05.347833] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:04.932 [2024-07-25 09:47:05.347842] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.932 [2024-07-25 09:47:05.347852] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:04.932 [2024-07-25 09:47:05.347860] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:04.932 [2024-07-25 09:47:05.347868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:04.932 [2024-07-25 09:47:05.347875] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:04.932 [2024-07-25 09:47:05.347884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:04.932 [2024-07-25 09:47:05.347892] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:04.932 [2024-07-25 09:47:05.347900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:04.932 [2024-07-25 09:47:05.347908] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:04.932 [2024-07-25 09:47:05.347915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:04.932 [2024-07-25 09:47:05.347923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:04.932 [2024-07-25 09:47:05.347931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:04.932 [2024-07-25 09:47:05.347939] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.932 [2024-07-25 09:47:05.347964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:04.932 [2024-07-25 09:47:05.347975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:04.932 [2024-07-25 09:47:05.347984] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.932 [2024-07-25 09:47:05.347992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:04.932 [2024-07-25 09:47:05.348000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:04.932 [2024-07-25 09:47:05.348008] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:04.932 [2024-07-25 09:47:05.348015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:04.932 [2024-07-25 09:47:05.348023] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:04.932 [2024-07-25 09:47:05.348030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:04.932 [2024-07-25 09:47:05.348038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:04.932 [2024-07-25 09:47:05.348046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:04.932 [2024-07-25 09:47:05.348053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:04.932 [2024-07-25 09:47:05.348060] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:04.932 [2024-07-25 09:47:05.348069] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:04.932 [2024-07-25 09:47:05.348076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:04.932 [2024-07-25 09:47:05.348084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:04.932 [2024-07-25 09:47:05.348091] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:04.932 [2024-07-25 09:47:05.348099] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:04.932 [2024-07-25 09:47:05.348107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:04.932 [2024-07-25 09:47:05.348114] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:04.932 [2024-07-25 
09:47:05.348122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:04.932 [2024-07-25 09:47:05.348129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:04.932 [2024-07-25 09:47:05.348136] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:04.932 [2024-07-25 09:47:05.348144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.932 [2024-07-25 09:47:05.348151] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:04.932 [2024-07-25 09:47:05.348159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:04.932 [2024-07-25 09:47:05.348168] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.933 [2024-07-25 09:47:05.348176] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:04.933 [2024-07-25 09:47:05.348184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:04.933 [2024-07-25 09:47:05.348192] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:04.933 [2024-07-25 09:47:05.348200] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:04.933 [2024-07-25 09:47:05.348212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:04.933 [2024-07-25 09:47:05.348220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:04.933 [2024-07-25 09:47:05.348241] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:04.933 [2024-07-25 09:47:05.348250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:04.933 [2024-07-25 09:47:05.348258] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:04.933 [2024-07-25 09:47:05.348266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:04.933 [2024-07-25 09:47:05.348275] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:04.933 [2024-07-25 09:47:05.348286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:04.933 [2024-07-25 09:47:05.348295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:04.933 [2024-07-25 09:47:05.348304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:04.933 [2024-07-25 09:47:05.348312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:04.933 [2024-07-25 09:47:05.348321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:04.933 [2024-07-25 09:47:05.348330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:04.933 [2024-07-25 09:47:05.348338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:04.933 [2024-07-25 09:47:05.348346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:04.933 [2024-07-25 09:47:05.348355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:04.933 [2024-07-25 09:47:05.348363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:04.933 [2024-07-25 09:47:05.348371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:04.933 [2024-07-25 09:47:05.348380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:04.933 [2024-07-25 09:47:05.348388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:04.933 [2024-07-25 09:47:05.348396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:04.933 [2024-07-25 09:47:05.348404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:04.933 [2024-07-25 09:47:05.348412] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:04.933 [2024-07-25 09:47:05.348421] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:04.933 [2024-07-25 09:47:05.348430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:04.933 [2024-07-25 09:47:05.348438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:04.933 [2024-07-25 09:47:05.348446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:04.933 [2024-07-25 09:47:05.348457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:04.933 [2024-07-25 09:47:05.348468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.933 [2024-07-25 09:47:05.348478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:04.933 [2024-07-25 09:47:05.348487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:33:04.933 [2024-07-25 09:47:05.348495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.933 [2024-07-25 09:47:05.408785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.933 [2024-07-25 09:47:05.408829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:04.933 [2024-07-25 09:47:05.408842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.348 ms 00:33:04.933 [2024-07-25 09:47:05.408850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.933 [2024-07-25 09:47:05.408948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.933 [2024-07-25 09:47:05.408956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:04.933 [2024-07-25 09:47:05.408968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:33:04.933 [2024-07-25 09:47:05.408975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.933 [2024-07-25 09:47:05.463988] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.933 [2024-07-25 09:47:05.464031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:04.933 [2024-07-25 09:47:05.464044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.026 ms 00:33:04.933 [2024-07-25 09:47:05.464052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.933 [2024-07-25 09:47:05.464111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.933 [2024-07-25 09:47:05.464120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:04.933 [2024-07-25 09:47:05.464128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:04.933 [2024-07-25 09:47:05.464136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.933 [2024-07-25 09:47:05.464651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.933 [2024-07-25 09:47:05.464687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:04.933 [2024-07-25 09:47:05.464698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:33:04.933 [2024-07-25 09:47:05.464706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.933 [2024-07-25 09:47:05.464843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.933 [2024-07-25 09:47:05.464857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:04.933 [2024-07-25 09:47:05.464867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:33:04.933 [2024-07-25 09:47:05.464875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.933 [2024-07-25 09:47:05.487853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.933 [2024-07-25 09:47:05.487894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:04.933 [2024-07-25 09:47:05.487906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.995 ms 00:33:04.933 [2024-07-25 09:47:05.487914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:04.933 [2024-07-25 09:47:05.510859] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:04.933 [2024-07-25 09:47:05.510905] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:04.933 [2024-07-25 09:47:05.510921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:04.933 [2024-07-25 09:47:05.510929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:04.933 [2024-07-25 09:47:05.510940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.915 ms 00:33:04.933 [2024-07-25 09:47:05.510947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.544481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.544536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:05.193 [2024-07-25 09:47:05.544550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.533 ms 00:33:05.193 [2024-07-25 09:47:05.544566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.565837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 
[2024-07-25 09:47:05.565885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:05.193 [2024-07-25 09:47:05.565898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.211 ms 00:33:05.193 [2024-07-25 09:47:05.565905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.587654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.587701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:05.193 [2024-07-25 09:47:05.587715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.736 ms 00:33:05.193 [2024-07-25 09:47:05.587723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.588819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.588850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:05.193 [2024-07-25 09:47:05.588861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.970 ms 00:33:05.193 [2024-07-25 09:47:05.588870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.683060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.683125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:05.193 [2024-07-25 09:47:05.683138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.347 ms 00:33:05.193 [2024-07-25 09:47:05.683146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.696987] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:05.193 [2024-07-25 09:47:05.700340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.700374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:05.193 [2024-07-25 09:47:05.700385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.156 ms 00:33:05.193 [2024-07-25 09:47:05.700393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.700515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.700530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:05.193 [2024-07-25 09:47:05.700539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:33:05.193 [2024-07-25 09:47:05.700548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.700641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.700652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:05.193 [2024-07-25 09:47:05.700660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:33:05.193 [2024-07-25 09:47:05.700668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.700692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.700701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:05.193 [2024-07-25 09:47:05.700713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:05.193 
[2024-07-25 09:47:05.700720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.700749] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:05.193 [2024-07-25 09:47:05.700760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.700769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:05.193 [2024-07-25 09:47:05.700778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:05.193 [2024-07-25 09:47:05.700786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.740758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.740809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:05.193 [2024-07-25 09:47:05.740820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.027 ms 00:33:05.193 [2024-07-25 09:47:05.740828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.740922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:05.193 [2024-07-25 09:47:05.740936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:05.193 [2024-07-25 09:47:05.740945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:33:05.193 [2024-07-25 09:47:05.740952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:05.193 [2024-07-25 09:47:05.742113] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 438.606 ms, result 0 00:33:39.467  Copying: 30/1024 [MB] (30 MBps) Copying: 60/1024 [MB] (29 MBps) Copying: 91/1024 [MB] (30 MBps) Copying: 121/1024 [MB] (30 MBps) Copying: 150/1024 [MB] (29 MBps) Copying: 180/1024 [MB] (29 MBps) Copying: 210/1024 [MB] (29 MBps) Copying: 239/1024 [MB] (29 MBps) Copying: 270/1024 [MB] (31 MBps) Copying: 302/1024 [MB] (31 MBps) Copying: 333/1024 [MB] (31 MBps) Copying: 365/1024 [MB] (31 MBps) Copying: 395/1024 [MB] (30 MBps) Copying: 426/1024 [MB] (30 MBps) Copying: 457/1024 [MB] (30 MBps) Copying: 488/1024 [MB] (30 MBps) Copying: 519/1024 [MB] (31 MBps) Copying: 551/1024 [MB] (31 MBps) Copying: 581/1024 [MB] (30 MBps) Copying: 612/1024 [MB] (31 MBps) Copying: 644/1024 [MB] (31 MBps) Copying: 674/1024 [MB] (30 MBps) Copying: 703/1024 [MB] (29 MBps) Copying: 733/1024 [MB] (29 MBps) Copying: 762/1024 [MB] (29 MBps) Copying: 793/1024 [MB] (30 MBps) Copying: 824/1024 [MB] (30 MBps) Copying: 855/1024 [MB] (31 MBps) Copying: 885/1024 [MB] (30 MBps) Copying: 917/1024 [MB] (31 MBps) Copying: 949/1024 [MB] (32 MBps) Copying: 983/1024 [MB] (33 MBps) Copying: 1015/1024 [MB] (31 MBps) Copying: 1048448/1048576 [kB] (9052 kBps) Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-25 09:47:39.819736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.467 [2024-07-25 09:47:39.819805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:39.467 [2024-07-25 09:47:39.819823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:39.467 [2024-07-25 09:47:39.819833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.467 [2024-07-25 09:47:39.821391] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:39.467 
[2024-07-25 09:47:39.829930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.467 [2024-07-25 09:47:39.829963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:39.467 [2024-07-25 09:47:39.829974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.515 ms 00:33:39.467 [2024-07-25 09:47:39.829998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.467 [2024-07-25 09:47:39.839289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.467 [2024-07-25 09:47:39.839350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:39.467 [2024-07-25 09:47:39.839379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.261 ms 00:33:39.467 [2024-07-25 09:47:39.839387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.467 [2024-07-25 09:47:39.862040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.467 [2024-07-25 09:47:39.862082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:33:39.467 [2024-07-25 09:47:39.862095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.675 ms 00:33:39.467 [2024-07-25 09:47:39.862104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.467 [2024-07-25 09:47:39.867874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.467 [2024-07-25 09:47:39.867906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:33:39.467 [2024-07-25 09:47:39.867938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.751 ms 00:33:39.467 [2024-07-25 09:47:39.867946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.467 [2024-07-25 09:47:39.907046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.467 [2024-07-25 09:47:39.907086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:33:39.467 [2024-07-25 09:47:39.907097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.130 ms 00:33:39.467 [2024-07-25 09:47:39.907104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.467 [2024-07-25 09:47:39.930402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.467 [2024-07-25 09:47:39.930441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:33:39.467 [2024-07-25 09:47:39.930454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.291 ms 00:33:39.467 [2024-07-25 09:47:39.930462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.467 [2024-07-25 09:47:40.014876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.467 [2024-07-25 09:47:40.014978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:33:39.467 [2024-07-25 09:47:40.014996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.512 ms 00:33:39.467 [2024-07-25 09:47:40.015019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.467 [2024-07-25 09:47:40.055737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.467 [2024-07-25 09:47:40.055786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:33:39.467 [2024-07-25 09:47:40.055799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.771 ms 00:33:39.467 [2024-07-25 09:47:40.055806] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.728 [2024-07-25 09:47:40.098775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.728 [2024-07-25 09:47:40.098818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:33:39.728 [2024-07-25 09:47:40.098830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.998 ms 00:33:39.728 [2024-07-25 09:47:40.098837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.728 [2024-07-25 09:47:40.139948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.728 [2024-07-25 09:47:40.140001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:33:39.728 [2024-07-25 09:47:40.140015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.145 ms 00:33:39.728 [2024-07-25 09:47:40.140023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.728 [2024-07-25 09:47:40.178845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.728 [2024-07-25 09:47:40.178883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:33:39.728 [2024-07-25 09:47:40.178894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.799 ms 00:33:39.728 [2024-07-25 09:47:40.178902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.728 [2024-07-25 09:47:40.178940] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:39.728 [2024-07-25 09:47:40.178955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104704 / 261120 wr_cnt: 1 state: open 00:33:39.728 [2024-07-25 09:47:40.178965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.178973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.178981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.178990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.178998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 
09:47:40.179066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:33:39.728 [2024-07-25 09:47:40.179263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:39.728 [2024-07-25 09:47:40.179382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:39.729 [2024-07-25 09:47:40.179791] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:39.729 [2024-07-25 09:47:40.179798] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dc496ea9-22e3-4957-af43-c6124a444212 00:33:39.729 [2024-07-25 09:47:40.179812] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104704 00:33:39.729 [2024-07-25 09:47:40.179819] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105664 00:33:39.729 [2024-07-25 09:47:40.179828] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104704 00:33:39.729 [2024-07-25 09:47:40.179836] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:33:39.729 [2024-07-25 09:47:40.179843] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:39.729 [2024-07-25 09:47:40.179850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:39.729 [2024-07-25 09:47:40.179858] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:39.729 [2024-07-25 09:47:40.179864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:39.729 [2024-07-25 09:47:40.179871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:39.729 [2024-07-25 09:47:40.179878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.729 [2024-07-25 09:47:40.179886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:39.729 [2024-07-25 09:47:40.179907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms 00:33:39.729 [2024-07-25 09:47:40.179915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.729 [2024-07-25 09:47:40.201194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.729 [2024-07-25 09:47:40.201240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:33:39.729 [2024-07-25 09:47:40.201253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.283 ms 00:33:39.729 [2024-07-25 09:47:40.201261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.729 [2024-07-25 09:47:40.201871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:39.729 [2024-07-25 09:47:40.201889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:39.729 [2024-07-25 09:47:40.201899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:33:39.730 [2024-07-25 09:47:40.201920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.730 [2024-07-25 09:47:40.253124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.730 [2024-07-25 09:47:40.253194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:39.730 [2024-07-25 09:47:40.253208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.730 [2024-07-25 09:47:40.253217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.730 [2024-07-25 09:47:40.253297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.730 [2024-07-25 09:47:40.253307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:39.730 [2024-07-25 09:47:40.253316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.730 [2024-07-25 09:47:40.253324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.730 [2024-07-25 09:47:40.253404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.730 [2024-07-25 09:47:40.253417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:39.730 [2024-07-25 09:47:40.253426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.730 [2024-07-25 09:47:40.253434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.730 [2024-07-25 09:47:40.253451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.730 [2024-07-25 09:47:40.253460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:39.730 [2024-07-25 09:47:40.253468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.730 [2024-07-25 09:47:40.253475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.988 [2024-07-25 09:47:40.381676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.988 [2024-07-25 09:47:40.381741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:39.988 [2024-07-25 09:47:40.381755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.988 [2024-07-25 09:47:40.381765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.988 [2024-07-25 09:47:40.499292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.988 [2024-07-25 09:47:40.499353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:39.988 [2024-07-25 09:47:40.499366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.988 [2024-07-25 09:47:40.499375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.988 [2024-07-25 09:47:40.499464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.988 [2024-07-25 
09:47:40.499480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:39.988 [2024-07-25 09:47:40.499489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.988 [2024-07-25 09:47:40.499497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.988 [2024-07-25 09:47:40.499542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.988 [2024-07-25 09:47:40.499552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:39.988 [2024-07-25 09:47:40.499561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.988 [2024-07-25 09:47:40.499568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.988 [2024-07-25 09:47:40.499667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.988 [2024-07-25 09:47:40.499691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:39.988 [2024-07-25 09:47:40.499700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.988 [2024-07-25 09:47:40.499708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.988 [2024-07-25 09:47:40.499744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.988 [2024-07-25 09:47:40.499755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:39.988 [2024-07-25 09:47:40.499763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.988 [2024-07-25 09:47:40.499771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.988 [2024-07-25 09:47:40.499811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.988 [2024-07-25 09:47:40.499820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:39.988 [2024-07-25 09:47:40.499832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.988 [2024-07-25 09:47:40.499840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.988 [2024-07-25 09:47:40.499886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:39.988 [2024-07-25 09:47:40.499896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:39.988 [2024-07-25 09:47:40.499903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:39.988 [2024-07-25 09:47:40.499911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:39.988 [2024-07-25 09:47:40.500029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 683.156 ms, result 0 00:33:42.520 00:33:42.520 00:33:42.520 09:47:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:33:44.423 09:47:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:33:44.423 [2024-07-25 09:47:44.805580] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:33:44.423 [2024-07-25 09:47:44.805715] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84078 ] 00:33:44.423 [2024-07-25 09:47:44.968643] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.682 [2024-07-25 09:47:45.220766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:45.250 [2024-07-25 09:47:45.678196] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:45.250 [2024-07-25 09:47:45.678276] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:45.250 [2024-07-25 09:47:45.836823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.250 [2024-07-25 09:47:45.836894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:45.250 [2024-07-25 09:47:45.836911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:45.250 [2024-07-25 09:47:45.836922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.250 [2024-07-25 09:47:45.836989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.250 [2024-07-25 09:47:45.837002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:45.250 [2024-07-25 09:47:45.837012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:33:45.250 [2024-07-25 09:47:45.837023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.250 [2024-07-25 09:47:45.837048] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:45.250 [2024-07-25 09:47:45.838389] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:45.250 [2024-07-25 09:47:45.838421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.250 [2024-07-25 09:47:45.838431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:45.250 [2024-07-25 09:47:45.838441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:33:45.250 [2024-07-25 09:47:45.838461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.250 [2024-07-25 09:47:45.839972] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:33:45.250 [2024-07-25 09:47:45.864053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.250 [2024-07-25 09:47:45.864120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:45.250 [2024-07-25 09:47:45.864135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.126 ms 00:33:45.250 [2024-07-25 09:47:45.864144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.250 [2024-07-25 09:47:45.864284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.250 [2024-07-25 09:47:45.864301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:45.510 [2024-07-25 09:47:45.864311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:45.510 [2024-07-25 09:47:45.864321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.510 [2024-07-25 09:47:45.872041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:45.510 [2024-07-25 09:47:45.872090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:45.510 [2024-07-25 09:47:45.872119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.639 ms 00:33:45.510 [2024-07-25 09:47:45.872128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.510 [2024-07-25 09:47:45.872229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.510 [2024-07-25 09:47:45.872259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:45.510 [2024-07-25 09:47:45.872270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:33:45.510 [2024-07-25 09:47:45.872290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.510 [2024-07-25 09:47:45.872353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.510 [2024-07-25 09:47:45.872364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:45.510 [2024-07-25 09:47:45.872373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:45.510 [2024-07-25 09:47:45.872380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.510 [2024-07-25 09:47:45.872408] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:45.510 [2024-07-25 09:47:45.878905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.510 [2024-07-25 09:47:45.878971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:45.510 [2024-07-25 09:47:45.878984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.518 ms 00:33:45.510 [2024-07-25 09:47:45.878992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.510 [2024-07-25 09:47:45.879039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.510 [2024-07-25 09:47:45.879049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:45.510 [2024-07-25 09:47:45.879058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:45.510 [2024-07-25 09:47:45.879066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.510 [2024-07-25 09:47:45.879132] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:45.510 [2024-07-25 09:47:45.879156] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:45.510 [2024-07-25 09:47:45.879194] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:45.510 [2024-07-25 09:47:45.879213] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:33:45.510 [2024-07-25 09:47:45.879339] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:45.510 [2024-07-25 09:47:45.879353] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:45.510 [2024-07-25 09:47:45.879367] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:33:45.510 [2024-07-25 09:47:45.879379] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:45.510 [2024-07-25 09:47:45.879389] ftl_layout.c: 
677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:45.510 [2024-07-25 09:47:45.879398] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:45.510 [2024-07-25 09:47:45.879407] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:45.510 [2024-07-25 09:47:45.879416] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:45.510 [2024-07-25 09:47:45.879424] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:45.510 [2024-07-25 09:47:45.879434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.510 [2024-07-25 09:47:45.879445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:45.510 [2024-07-25 09:47:45.879454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:33:45.510 [2024-07-25 09:47:45.879464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.510 [2024-07-25 09:47:45.879549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.510 [2024-07-25 09:47:45.879559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:45.510 [2024-07-25 09:47:45.879570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:33:45.510 [2024-07-25 09:47:45.879578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.510 [2024-07-25 09:47:45.879677] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:45.510 [2024-07-25 09:47:45.879690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:45.511 [2024-07-25 09:47:45.879702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:45.511 [2024-07-25 09:47:45.879712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.511 [2024-07-25 09:47:45.879721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:45.511 [2024-07-25 09:47:45.879729] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:45.511 [2024-07-25 09:47:45.879737] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:45.511 [2024-07-25 09:47:45.879746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:45.511 [2024-07-25 09:47:45.879755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:45.511 [2024-07-25 09:47:45.879763] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:45.511 [2024-07-25 09:47:45.879771] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:45.511 [2024-07-25 09:47:45.879779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:45.511 [2024-07-25 09:47:45.879787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:45.511 [2024-07-25 09:47:45.879796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:45.511 [2024-07-25 09:47:45.879805] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:45.511 [2024-07-25 09:47:45.879813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.511 [2024-07-25 09:47:45.879821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:45.511 [2024-07-25 09:47:45.879829] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:45.511 [2024-07-25 09:47:45.879836] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.511 [2024-07-25 09:47:45.879844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:45.511 [2024-07-25 09:47:45.879868] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:45.511 [2024-07-25 09:47:45.879876] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:45.511 [2024-07-25 09:47:45.879884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:45.511 [2024-07-25 09:47:45.879892] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:45.511 [2024-07-25 09:47:45.879900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:45.511 [2024-07-25 09:47:45.879908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:45.511 [2024-07-25 09:47:45.879916] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:45.511 [2024-07-25 09:47:45.879924] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:45.511 [2024-07-25 09:47:45.879932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:45.511 [2024-07-25 09:47:45.879940] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:45.511 [2024-07-25 09:47:45.879947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:45.511 [2024-07-25 09:47:45.879955] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:45.511 [2024-07-25 09:47:45.879963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:45.511 [2024-07-25 09:47:45.879971] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:45.511 [2024-07-25 09:47:45.879979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:45.511 [2024-07-25 09:47:45.879987] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:45.511 [2024-07-25 09:47:45.879995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:45.511 [2024-07-25 09:47:45.880004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:45.511 [2024-07-25 09:47:45.880011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:45.511 [2024-07-25 09:47:45.880019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.511 [2024-07-25 09:47:45.880028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:45.511 [2024-07-25 09:47:45.880035] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:45.511 [2024-07-25 09:47:45.880043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.511 [2024-07-25 09:47:45.880050] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:45.511 [2024-07-25 09:47:45.880061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:45.511 [2024-07-25 09:47:45.880069] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:45.511 [2024-07-25 09:47:45.880078] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:45.511 [2024-07-25 09:47:45.880087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:45.511 [2024-07-25 09:47:45.880096] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:45.511 [2024-07-25 09:47:45.880103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:45.511 
[2024-07-25 09:47:45.880112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:45.511 [2024-07-25 09:47:45.880119] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:45.511 [2024-07-25 09:47:45.880127] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:45.511 [2024-07-25 09:47:45.880137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:45.511 [2024-07-25 09:47:45.880148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:45.511 [2024-07-25 09:47:45.880159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:45.511 [2024-07-25 09:47:45.880167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:45.511 [2024-07-25 09:47:45.880176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:45.511 [2024-07-25 09:47:45.880184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:45.511 [2024-07-25 09:47:45.880193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:45.511 [2024-07-25 09:47:45.880201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:45.511 [2024-07-25 09:47:45.880210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:45.511 [2024-07-25 09:47:45.880219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:45.511 [2024-07-25 09:47:45.880227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:45.511 [2024-07-25 09:47:45.880236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:45.511 [2024-07-25 09:47:45.880255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:45.511 [2024-07-25 09:47:45.880265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:45.511 [2024-07-25 09:47:45.880273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:45.511 [2024-07-25 09:47:45.880281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:45.511 [2024-07-25 09:47:45.880289] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:45.512 [2024-07-25 09:47:45.880299] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:45.512 [2024-07-25 09:47:45.880312] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:45.512 [2024-07-25 09:47:45.880322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:45.512 [2024-07-25 09:47:45.880331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:45.512 [2024-07-25 09:47:45.880340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:45.512 [2024-07-25 09:47:45.880350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:45.880359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:45.512 [2024-07-25 09:47:45.880368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:33:45.512 [2024-07-25 09:47:45.880376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.512 [2024-07-25 09:47:45.942953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:45.943029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:45.512 [2024-07-25 09:47:45.943044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.640 ms 00:33:45.512 [2024-07-25 09:47:45.943053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.512 [2024-07-25 09:47:45.943172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:45.943182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:45.512 [2024-07-25 09:47:45.943190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:33:45.512 [2024-07-25 09:47:45.943197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.512 [2024-07-25 09:47:46.002759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:46.002819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:45.512 [2024-07-25 09:47:46.002834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.570 ms 00:33:45.512 [2024-07-25 09:47:46.002842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.512 [2024-07-25 09:47:46.002911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:46.002922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:45.512 [2024-07-25 09:47:46.002932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:45.512 [2024-07-25 09:47:46.002945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.512 [2024-07-25 09:47:46.003461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:46.003484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:45.512 [2024-07-25 09:47:46.003494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:33:45.512 [2024-07-25 09:47:46.003503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.512 [2024-07-25 09:47:46.003636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:46.003657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:45.512 [2024-07-25 09:47:46.003667] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:33:45.512 [2024-07-25 09:47:46.003675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.512 [2024-07-25 09:47:46.027132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:46.027190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:45.512 [2024-07-25 09:47:46.027204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.473 ms 00:33:45.512 [2024-07-25 09:47:46.027233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.512 [2024-07-25 09:47:46.051141] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:33:45.512 [2024-07-25 09:47:46.051200] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:45.512 [2024-07-25 09:47:46.051217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:46.051226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:45.512 [2024-07-25 09:47:46.051246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.855 ms 00:33:45.512 [2024-07-25 09:47:46.051255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.512 [2024-07-25 09:47:46.085848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:46.085969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:45.512 [2024-07-25 09:47:46.085987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.590 ms 00:33:45.512 [2024-07-25 09:47:46.085996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.512 [2024-07-25 09:47:46.110693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.512 [2024-07-25 09:47:46.110771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:45.512 [2024-07-25 09:47:46.110787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.655 ms 00:33:45.512 [2024-07-25 09:47:46.110796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.772 [2024-07-25 09:47:46.133814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.772 [2024-07-25 09:47:46.133881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:45.772 [2024-07-25 09:47:46.133897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.980 ms 00:33:45.772 [2024-07-25 09:47:46.133905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.772 [2024-07-25 09:47:46.134836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.772 [2024-07-25 09:47:46.134873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:45.772 [2024-07-25 09:47:46.134885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:33:45.772 [2024-07-25 09:47:46.134893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.772 [2024-07-25 09:47:46.239422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.772 [2024-07-25 09:47:46.239496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:45.772 [2024-07-25 09:47:46.239513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 104.693 ms 00:33:45.772 [2024-07-25 09:47:46.239532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.772 [2024-07-25 09:47:46.255681] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:45.772 [2024-07-25 09:47:46.259216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.772 [2024-07-25 09:47:46.259267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:45.772 [2024-07-25 09:47:46.259280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.635 ms 00:33:45.772 [2024-07-25 09:47:46.259289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.772 [2024-07-25 09:47:46.259408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.772 [2024-07-25 09:47:46.259434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:45.772 [2024-07-25 09:47:46.259444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:45.772 [2024-07-25 09:47:46.259452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.772 [2024-07-25 09:47:46.261080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.772 [2024-07-25 09:47:46.261120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:45.772 [2024-07-25 09:47:46.261131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:33:45.772 [2024-07-25 09:47:46.261140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.772 [2024-07-25 09:47:46.261180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.772 [2024-07-25 09:47:46.261191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:45.772 [2024-07-25 09:47:46.261200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:45.772 [2024-07-25 09:47:46.261209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.772 [2024-07-25 09:47:46.261260] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:45.772 [2024-07-25 09:47:46.261272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.772 [2024-07-25 09:47:46.261285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:45.772 [2024-07-25 09:47:46.261295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:45.772 [2024-07-25 09:47:46.261306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.772 [2024-07-25 09:47:46.307534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.772 [2024-07-25 09:47:46.307601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:45.772 [2024-07-25 09:47:46.307618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.293 ms 00:33:45.772 [2024-07-25 09:47:46.307632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:45.772 [2024-07-25 09:47:46.307743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:45.772 [2024-07-25 09:47:46.307756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:45.772 [2024-07-25 09:47:46.307765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:33:45.772 [2024-07-25 09:47:46.307774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:45.772 [2024-07-25 09:47:46.314563] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 477.112 ms, result 0 00:34:15.271  Copying: 928/1048576 [kB] (928 kBps) Copying: 4284/1048576 [kB] (3356 kBps) Copying: 31/1024 [MB] (26 MBps) Copying: 68/1024 [MB] (37 MBps) Copying: 107/1024 [MB] (38 MBps) Copying: 145/1024 [MB] (37 MBps) Copying: 183/1024 [MB] (38 MBps) Copying: 221/1024 [MB] (38 MBps) Copying: 257/1024 [MB] (35 MBps) Copying: 296/1024 [MB] (39 MBps) Copying: 332/1024 [MB] (36 MBps) Copying: 369/1024 [MB] (36 MBps) Copying: 409/1024 [MB] (40 MBps) Copying: 447/1024 [MB] (38 MBps) Copying: 486/1024 [MB] (38 MBps) Copying: 525/1024 [MB] (38 MBps) Copying: 564/1024 [MB] (38 MBps) Copying: 602/1024 [MB] (38 MBps) Copying: 641/1024 [MB] (38 MBps) Copying: 679/1024 [MB] (38 MBps) Copying: 717/1024 [MB] (38 MBps) Copying: 756/1024 [MB] (38 MBps) Copying: 796/1024 [MB] (40 MBps) Copying: 837/1024 [MB] (40 MBps) Copying: 876/1024 [MB] (39 MBps) Copying: 916/1024 [MB] (40 MBps) Copying: 957/1024 [MB] (41 MBps) Copying: 998/1024 [MB] (40 MBps) Copying: 1024/1024 [MB] (average 35 MBps)[2024-07-25 09:48:15.628990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.271 [2024-07-25 09:48:15.629086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:15.271 [2024-07-25 09:48:15.629122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:15.271 [2024-07-25 09:48:15.629135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.271 [2024-07-25 09:48:15.629169] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:15.271 [2024-07-25 09:48:15.635161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.271 [2024-07-25 09:48:15.635239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:15.271 [2024-07-25 09:48:15.635254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.974 ms 00:34:15.271 [2024-07-25 09:48:15.635265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.271 [2024-07-25 09:48:15.635569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.271 [2024-07-25 09:48:15.635588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:15.271 [2024-07-25 09:48:15.635607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:34:15.271 [2024-07-25 09:48:15.635616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.271 [2024-07-25 09:48:15.648338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.271 [2024-07-25 09:48:15.648418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:15.271 [2024-07-25 09:48:15.648435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.720 ms 00:34:15.271 [2024-07-25 09:48:15.648445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.271 [2024-07-25 09:48:15.655006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.271 [2024-07-25 09:48:15.655078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:15.271 [2024-07-25 09:48:15.655093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.527 ms 00:34:15.271 [2024-07-25 09:48:15.655112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:34:15.271 [2024-07-25 09:48:15.703382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.271 [2024-07-25 09:48:15.703450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:15.271 [2024-07-25 09:48:15.703466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.264 ms 00:34:15.272 [2024-07-25 09:48:15.703475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.272 [2024-07-25 09:48:15.730279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.272 [2024-07-25 09:48:15.730375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:15.272 [2024-07-25 09:48:15.730391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.759 ms 00:34:15.272 [2024-07-25 09:48:15.730401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.272 [2024-07-25 09:48:15.734396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.272 [2024-07-25 09:48:15.734452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:15.272 [2024-07-25 09:48:15.734466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.915 ms 00:34:15.272 [2024-07-25 09:48:15.734476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.272 [2024-07-25 09:48:15.780496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.272 [2024-07-25 09:48:15.780592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:34:15.272 [2024-07-25 09:48:15.780614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.080 ms 00:34:15.272 [2024-07-25 09:48:15.780628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.272 [2024-07-25 09:48:15.827068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.272 [2024-07-25 09:48:15.827179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:34:15.272 [2024-07-25 09:48:15.827203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.406 ms 00:34:15.272 [2024-07-25 09:48:15.827216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.272 [2024-07-25 09:48:15.869415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.272 [2024-07-25 09:48:15.869505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:15.272 [2024-07-25 09:48:15.869529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.161 ms 00:34:15.272 [2024-07-25 09:48:15.869570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.532 [2024-07-25 09:48:15.912154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.532 [2024-07-25 09:48:15.912277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:15.532 [2024-07-25 09:48:15.912301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.473 ms 00:34:15.532 [2024-07-25 09:48:15.912316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.532 [2024-07-25 09:48:15.912433] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:15.532 [2024-07-25 09:48:15.912476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:15.532 [2024-07-25 09:48:15.912497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 
3840 / 261120 wr_cnt: 1 state: open 00:34:15.532 [2024-07-25 09:48:15.912513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.912996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913305] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:15.532 [2024-07-25 09:48:15.913367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 
09:48:15.913695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.913992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.914009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.914024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.914039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:15.533 [2024-07-25 09:48:15.914066] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:15.533 [2024-07-25 09:48:15.914095] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dc496ea9-22e3-4957-af43-c6124a444212 00:34:15.533 [2024-07-25 09:48:15.914112] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:34:15.533 [2024-07-25 09:48:15.914137] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 162240 00:34:15.533 [2024-07-25 09:48:15.914151] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 160256 00:34:15.533 [2024-07-25 09:48:15.914167] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0124 00:34:15.533 [2024-07-25 09:48:15.914189] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:15.533 [2024-07-25 09:48:15.914204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:15.533 [2024-07-25 09:48:15.914217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:15.533 [2024-07-25 09:48:15.914247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:15.533 [2024-07-25 09:48:15.914262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:15.533 [2024-07-25 09:48:15.914279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.533 [2024-07-25 09:48:15.914295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:15.533 [2024-07-25 09:48:15.914311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.869 ms 00:34:15.533 [2024-07-25 09:48:15.914325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.533 [2024-07-25 09:48:15.937611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.533 [2024-07-25 09:48:15.937680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:15.533 [2024-07-25 09:48:15.937707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.236 ms 00:34:15.533 [2024-07-25 09:48:15.937730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.533 [2024-07-25 09:48:15.938348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.533 [2024-07-25 09:48:15.938368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:15.533 [2024-07-25 09:48:15.938379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:34:15.533 [2024-07-25 09:48:15.938389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.533 [2024-07-25 09:48:15.993686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.533 [2024-07-25 09:48:15.993755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:15.533 [2024-07-25 09:48:15.993772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.533 [2024-07-25 09:48:15.993782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.533 [2024-07-25 09:48:15.993882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.533 [2024-07-25 09:48:15.993896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:15.533 [2024-07-25 09:48:15.993906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.533 [2024-07-25 09:48:15.993915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.533 [2024-07-25 09:48:15.994004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.533 [2024-07-25 09:48:15.994022] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:15.533 [2024-07-25 09:48:15.994031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.533 [2024-07-25 09:48:15.994040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.533 [2024-07-25 09:48:15.994058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.533 [2024-07-25 09:48:15.994068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:15.533 [2024-07-25 09:48:15.994078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.533 [2024-07-25 09:48:15.994087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.533 [2024-07-25 09:48:16.139707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.533 [2024-07-25 09:48:16.139794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:15.533 [2024-07-25 09:48:16.139811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.533 [2024-07-25 09:48:16.139821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.792 [2024-07-25 09:48:16.265717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.793 [2024-07-25 09:48:16.265785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:15.793 [2024-07-25 09:48:16.265801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.793 [2024-07-25 09:48:16.265810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.793 [2024-07-25 09:48:16.265903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.793 [2024-07-25 09:48:16.265914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:15.793 [2024-07-25 09:48:16.265935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.793 [2024-07-25 09:48:16.265944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.793 [2024-07-25 09:48:16.265986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.793 [2024-07-25 09:48:16.265996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:15.793 [2024-07-25 09:48:16.266004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.793 [2024-07-25 09:48:16.266012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.793 [2024-07-25 09:48:16.266122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.793 [2024-07-25 09:48:16.266135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:15.793 [2024-07-25 09:48:16.266145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.793 [2024-07-25 09:48:16.266157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.793 [2024-07-25 09:48:16.266201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.793 [2024-07-25 09:48:16.266213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:15.793 [2024-07-25 09:48:16.266222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.793 [2024-07-25 09:48:16.266252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.793 [2024-07-25 09:48:16.266294] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:34:15.793 [2024-07-25 09:48:16.266303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:15.793 [2024-07-25 09:48:16.266312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.793 [2024-07-25 09:48:16.266320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.793 [2024-07-25 09:48:16.266372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:15.793 [2024-07-25 09:48:16.266382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:15.793 [2024-07-25 09:48:16.266391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:15.793 [2024-07-25 09:48:16.266399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.793 [2024-07-25 09:48:16.266569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 638.785 ms, result 0 00:34:17.168 00:34:17.168 00:34:17.168 09:48:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:19.702 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:19.702 09:48:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:19.702 [2024-07-25 09:48:19.885415] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:34:19.702 [2024-07-25 09:48:19.885556] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84432 ] 00:34:19.702 [2024-07-25 09:48:20.055794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.961 [2024-07-25 09:48:20.339991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:20.219 [2024-07-25 09:48:20.820007] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:20.219 [2024-07-25 09:48:20.820097] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:20.480 [2024-07-25 09:48:20.981482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.480 [2024-07-25 09:48:20.981549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:20.480 [2024-07-25 09:48:20.981564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:20.480 [2024-07-25 09:48:20.981574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.480 [2024-07-25 09:48:20.981640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.480 [2024-07-25 09:48:20.981652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:20.480 [2024-07-25 09:48:20.981662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:34:20.480 [2024-07-25 09:48:20.981673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.480 [2024-07-25 09:48:20.981699] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:20.480 [2024-07-25 09:48:20.983132] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: 
[FTL][ftl0] Using bdev as NV Cache device 00:34:20.480 [2024-07-25 09:48:20.983166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.480 [2024-07-25 09:48:20.983176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:20.480 [2024-07-25 09:48:20.983187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:34:20.480 [2024-07-25 09:48:20.983195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.480 [2024-07-25 09:48:20.984691] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:34:20.480 [2024-07-25 09:48:21.009337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.480 [2024-07-25 09:48:21.009407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:20.480 [2024-07-25 09:48:21.009423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.691 ms 00:34:20.480 [2024-07-25 09:48:21.009433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.480 [2024-07-25 09:48:21.009565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.480 [2024-07-25 09:48:21.009581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:20.480 [2024-07-25 09:48:21.009592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:34:20.480 [2024-07-25 09:48:21.009600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.480 [2024-07-25 09:48:21.017351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.480 [2024-07-25 09:48:21.017397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:20.480 [2024-07-25 09:48:21.017410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.650 ms 00:34:20.480 [2024-07-25 09:48:21.017419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.480 [2024-07-25 09:48:21.017518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.480 [2024-07-25 09:48:21.017536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:20.480 [2024-07-25 09:48:21.017546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:20.480 [2024-07-25 09:48:21.017554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.480 [2024-07-25 09:48:21.017618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.480 [2024-07-25 09:48:21.017628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:20.480 [2024-07-25 09:48:21.017638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:20.480 [2024-07-25 09:48:21.017646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.480 [2024-07-25 09:48:21.017672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:20.480 [2024-07-25 09:48:21.023771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.480 [2024-07-25 09:48:21.023815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:20.480 [2024-07-25 09:48:21.023827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:34:20.480 [2024-07-25 09:48:21.023835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.480 [2024-07-25 09:48:21.023886] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:34:20.480 [2024-07-25 09:48:21.023897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:20.480 [2024-07-25 09:48:21.023906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:20.480 [2024-07-25 09:48:21.023914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.480 [2024-07-25 09:48:21.023986] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:20.480 [2024-07-25 09:48:21.024010] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:20.480 [2024-07-25 09:48:21.024071] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:20.480 [2024-07-25 09:48:21.024099] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:34:20.480 [2024-07-25 09:48:21.024192] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:20.481 [2024-07-25 09:48:21.024213] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:20.481 [2024-07-25 09:48:21.024242] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:34:20.481 [2024-07-25 09:48:21.024255] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:20.481 [2024-07-25 09:48:21.024265] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:20.481 [2024-07-25 09:48:21.024274] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:20.481 [2024-07-25 09:48:21.024282] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:20.481 [2024-07-25 09:48:21.024291] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:20.481 [2024-07-25 09:48:21.024299] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:20.481 [2024-07-25 09:48:21.024310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.481 [2024-07-25 09:48:21.024322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:20.481 [2024-07-25 09:48:21.024331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:34:20.481 [2024-07-25 09:48:21.024338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.481 [2024-07-25 09:48:21.024424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.481 [2024-07-25 09:48:21.024434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:20.481 [2024-07-25 09:48:21.024442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:34:20.481 [2024-07-25 09:48:21.024449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.481 [2024-07-25 09:48:21.024541] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:20.481 [2024-07-25 09:48:21.024553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:20.481 [2024-07-25 09:48:21.024565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:20.481 [2024-07-25 09:48:21.024574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:20.481 [2024-07-25 
09:48:21.024582] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:20.481 [2024-07-25 09:48:21.024590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024597] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:20.481 [2024-07-25 09:48:21.024606] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:20.481 [2024-07-25 09:48:21.024613] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:20.481 [2024-07-25 09:48:21.024629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:20.481 [2024-07-25 09:48:21.024646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:20.481 [2024-07-25 09:48:21.024653] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:20.481 [2024-07-25 09:48:21.024661] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:20.481 [2024-07-25 09:48:21.024686] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:20.481 [2024-07-25 09:48:21.024693] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:20.481 [2024-07-25 09:48:21.024709] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:20.481 [2024-07-25 09:48:21.024717] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:20.481 [2024-07-25 09:48:21.024746] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:20.481 [2024-07-25 09:48:21.024764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:20.481 [2024-07-25 09:48:21.024772] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024780] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:20.481 [2024-07-25 09:48:21.024788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:20.481 [2024-07-25 09:48:21.024796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:20.481 [2024-07-25 09:48:21.024812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:20.481 [2024-07-25 09:48:21.024819] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:20.481 [2024-07-25 09:48:21.024835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:20.481 [2024-07-25 09:48:21.024842] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:20.481 [2024-07-25 09:48:21.024858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:20.481 [2024-07-25 09:48:21.024866] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 
MiB 00:34:20.481 [2024-07-25 09:48:21.024873] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:20.481 [2024-07-25 09:48:21.024883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:20.481 [2024-07-25 09:48:21.024891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:20.481 [2024-07-25 09:48:21.024899] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:20.481 [2024-07-25 09:48:21.024914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:20.481 [2024-07-25 09:48:21.024923] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024930] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:20.481 [2024-07-25 09:48:21.024939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:20.481 [2024-07-25 09:48:21.024948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:20.481 [2024-07-25 09:48:21.024956] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:20.481 [2024-07-25 09:48:21.024964] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:20.481 [2024-07-25 09:48:21.024973] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:20.481 [2024-07-25 09:48:21.024981] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:20.481 [2024-07-25 09:48:21.024989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:20.481 [2024-07-25 09:48:21.024996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:20.481 [2024-07-25 09:48:21.025004] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:20.481 [2024-07-25 09:48:21.025014] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:20.481 [2024-07-25 09:48:21.025026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:20.481 [2024-07-25 09:48:21.025036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:20.481 [2024-07-25 09:48:21.025046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:20.481 [2024-07-25 09:48:21.025054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:20.481 [2024-07-25 09:48:21.025063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:20.481 [2024-07-25 09:48:21.025072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:20.481 [2024-07-25 09:48:21.025080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:20.481 [2024-07-25 09:48:21.025088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:20.481 [2024-07-25 09:48:21.025096] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:20.481 [2024-07-25 09:48:21.025104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:20.481 [2024-07-25 09:48:21.025113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:20.481 [2024-07-25 09:48:21.025121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:20.481 [2024-07-25 09:48:21.025129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:20.481 [2024-07-25 09:48:21.025137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:20.482 [2024-07-25 09:48:21.025146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:20.482 [2024-07-25 09:48:21.025154] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:20.482 [2024-07-25 09:48:21.025164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:20.482 [2024-07-25 09:48:21.025177] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:20.482 [2024-07-25 09:48:21.025186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:20.482 [2024-07-25 09:48:21.025198] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:20.482 [2024-07-25 09:48:21.025207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:20.482 [2024-07-25 09:48:21.025217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.482 [2024-07-25 09:48:21.025227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:20.482 [2024-07-25 09:48:21.025235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:34:20.482 [2024-07-25 09:48:21.025254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.482 [2024-07-25 09:48:21.087134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.482 [2024-07-25 09:48:21.087191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:20.482 [2024-07-25 09:48:21.087224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.938 ms 00:34:20.482 [2024-07-25 09:48:21.087233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.482 [2024-07-25 09:48:21.087366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.482 [2024-07-25 09:48:21.087377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:20.482 [2024-07-25 09:48:21.087387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:34:20.482 [2024-07-25 09:48:21.087396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
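Each FTL management step in the traces above is reported as the same four trace_step entries from mngt/ftl_mngt.c: an Action or Rollback marker (:427), the step name (:428), its duration (:430), and its status (:431). Assuming the console output has been captured to a file (called build.log here purely for illustration) with one entry per line, a small shell filter along the following lines can condense a startup or shutdown trace into a per-step timing summary; it relies on every :428 "name" entry being immediately followed by its :430 "duration" entry, as is the case throughout this run.

  # Sketch only: pair each FTL management step with its duration.
  # build.log is a hypothetical capture of this console output, one entry per line.
  grep -E 'mngt/ftl_mngt\.c: 4(28|30):trace_step' build.log |
    sed -E 's/.*\[FTL\]\[ftl0\] //' |   # keep only "name: ..." / "duration: ... ms"
    paste - -                           # join each step name with its duration

Run against the 'FTL startup' sequence above, this would pair, for example, "name: Load super block" with "duration: 24.691 ms".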
00:34:20.741 [2024-07-25 09:48:21.147871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.741 [2024-07-25 09:48:21.147924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:20.741 [2024-07-25 09:48:21.147938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.488 ms 00:34:20.741 [2024-07-25 09:48:21.147947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.741 [2024-07-25 09:48:21.148011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.741 [2024-07-25 09:48:21.148021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:20.741 [2024-07-25 09:48:21.148031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:20.741 [2024-07-25 09:48:21.148044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.741 [2024-07-25 09:48:21.148555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.741 [2024-07-25 09:48:21.148573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:20.741 [2024-07-25 09:48:21.148584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:34:20.741 [2024-07-25 09:48:21.148593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.741 [2024-07-25 09:48:21.148734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.741 [2024-07-25 09:48:21.148754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:20.741 [2024-07-25 09:48:21.148764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:34:20.741 [2024-07-25 09:48:21.148773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.741 [2024-07-25 09:48:21.172525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.741 [2024-07-25 09:48:21.172576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:20.741 [2024-07-25 09:48:21.172590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.767 ms 00:34:20.741 [2024-07-25 09:48:21.172603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.741 [2024-07-25 09:48:21.197103] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:20.741 [2024-07-25 09:48:21.197165] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:20.741 [2024-07-25 09:48:21.197181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.741 [2024-07-25 09:48:21.197191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:20.741 [2024-07-25 09:48:21.197203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.461 ms 00:34:20.741 [2024-07-25 09:48:21.197212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.741 [2024-07-25 09:48:21.235612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.741 [2024-07-25 09:48:21.235730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:20.741 [2024-07-25 09:48:21.235764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.355 ms 00:34:20.741 [2024-07-25 09:48:21.235775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.741 [2024-07-25 09:48:21.260550] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:34:20.741 [2024-07-25 09:48:21.260619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:20.741 [2024-07-25 09:48:21.260640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.720 ms 00:34:20.741 [2024-07-25 09:48:21.260649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.741 [2024-07-25 09:48:21.286179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.741 [2024-07-25 09:48:21.286244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:20.741 [2024-07-25 09:48:21.286261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.452 ms 00:34:20.741 [2024-07-25 09:48:21.286270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:20.741 [2024-07-25 09:48:21.287394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:20.741 [2024-07-25 09:48:21.287423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:20.741 [2024-07-25 09:48:21.287436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:34:20.741 [2024-07-25 09:48:21.287445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.000 [2024-07-25 09:48:21.395379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.000 [2024-07-25 09:48:21.395451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:21.000 [2024-07-25 09:48:21.395467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.111 ms 00:34:21.000 [2024-07-25 09:48:21.395487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.000 [2024-07-25 09:48:21.411891] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:21.000 [2024-07-25 09:48:21.415530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.000 [2024-07-25 09:48:21.415574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:21.000 [2024-07-25 09:48:21.415587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.989 ms 00:34:21.000 [2024-07-25 09:48:21.415596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.000 [2024-07-25 09:48:21.415719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.000 [2024-07-25 09:48:21.415733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:21.000 [2024-07-25 09:48:21.415742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:34:21.000 [2024-07-25 09:48:21.415752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.000 [2024-07-25 09:48:21.416653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.000 [2024-07-25 09:48:21.416674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:21.000 [2024-07-25 09:48:21.416684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:34:21.000 [2024-07-25 09:48:21.416693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.000 [2024-07-25 09:48:21.416721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.000 [2024-07-25 09:48:21.416731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:21.000 [2024-07-25 09:48:21.416741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.006 ms 00:34:21.000 [2024-07-25 09:48:21.416749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.000 [2024-07-25 09:48:21.416784] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:21.000 [2024-07-25 09:48:21.416795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.000 [2024-07-25 09:48:21.416809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:21.000 [2024-07-25 09:48:21.416818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:21.000 [2024-07-25 09:48:21.416826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.000 [2024-07-25 09:48:21.464371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.000 [2024-07-25 09:48:21.464441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:21.000 [2024-07-25 09:48:21.464473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.613 ms 00:34:21.000 [2024-07-25 09:48:21.464491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.000 [2024-07-25 09:48:21.464613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:21.000 [2024-07-25 09:48:21.464624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:21.000 [2024-07-25 09:48:21.464640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:21.000 [2024-07-25 09:48:21.464649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:21.000 [2024-07-25 09:48:21.465956] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 484.903 ms, result 0 00:34:50.442  Copying: 36/1024 [MB] (36 MBps) Copying: 72/1024 [MB] (36 MBps) Copying: 107/1024 [MB] (34 MBps) Copying: 142/1024 [MB] (35 MBps) Copying: 176/1024 [MB] (33 MBps) Copying: 211/1024 [MB] (35 MBps) Copying: 247/1024 [MB] (35 MBps) Copying: 283/1024 [MB] (36 MBps) Copying: 317/1024 [MB] (33 MBps) Copying: 353/1024 [MB] (36 MBps) Copying: 386/1024 [MB] (32 MBps) Copying: 421/1024 [MB] (35 MBps) Copying: 458/1024 [MB] (36 MBps) Copying: 493/1024 [MB] (35 MBps) Copying: 531/1024 [MB] (37 MBps) Copying: 567/1024 [MB] (35 MBps) Copying: 603/1024 [MB] (36 MBps) Copying: 638/1024 [MB] (35 MBps) Copying: 673/1024 [MB] (35 MBps) Copying: 709/1024 [MB] (35 MBps) Copying: 744/1024 [MB] (34 MBps) Copying: 780/1024 [MB] (35 MBps) Copying: 814/1024 [MB] (34 MBps) Copying: 850/1024 [MB] (35 MBps) Copying: 883/1024 [MB] (33 MBps) Copying: 913/1024 [MB] (30 MBps) Copying: 946/1024 [MB] (33 MBps) Copying: 981/1024 [MB] (35 MBps) Copying: 1017/1024 [MB] (35 MBps) Copying: 1024/1024 [MB] (average 35 MBps)[2024-07-25 09:48:50.905903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.442 [2024-07-25 09:48:50.905990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:50.442 [2024-07-25 09:48:50.906017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:50.442 [2024-07-25 09:48:50.906034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.442 [2024-07-25 09:48:50.906072] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:50.442 [2024-07-25 09:48:50.916377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.442 [2024-07-25 09:48:50.916452] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:50.442 [2024-07-25 09:48:50.916478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.290 ms 00:34:50.443 [2024-07-25 09:48:50.916507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.443 [2024-07-25 09:48:50.917011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.443 [2024-07-25 09:48:50.917046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:50.443 [2024-07-25 09:48:50.917066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:34:50.443 [2024-07-25 09:48:50.917082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.443 [2024-07-25 09:48:50.922278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.443 [2024-07-25 09:48:50.922311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:34:50.443 [2024-07-25 09:48:50.922324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.177 ms 00:34:50.443 [2024-07-25 09:48:50.922336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.443 [2024-07-25 09:48:50.930394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.443 [2024-07-25 09:48:50.930437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:34:50.443 [2024-07-25 09:48:50.930449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.041 ms 00:34:50.443 [2024-07-25 09:48:50.930458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.443 [2024-07-25 09:48:50.978393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.443 [2024-07-25 09:48:50.978469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:34:50.443 [2024-07-25 09:48:50.978484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.924 ms 00:34:50.443 [2024-07-25 09:48:50.978493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.443 [2024-07-25 09:48:51.005818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.443 [2024-07-25 09:48:51.005890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:34:50.443 [2024-07-25 09:48:51.005907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.290 ms 00:34:50.443 [2024-07-25 09:48:51.005917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.443 [2024-07-25 09:48:51.009466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.443 [2024-07-25 09:48:51.009526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:34:50.443 [2024-07-25 09:48:51.009549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.466 ms 00:34:50.443 [2024-07-25 09:48:51.009558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.702 [2024-07-25 09:48:51.059543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.702 [2024-07-25 09:48:51.059618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:34:50.702 [2024-07-25 09:48:51.059633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.057 ms 00:34:50.702 [2024-07-25 09:48:51.059642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.702 [2024-07-25 09:48:51.108706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:34:50.702 [2024-07-25 09:48:51.108774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:34:50.702 [2024-07-25 09:48:51.108788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.073 ms 00:34:50.702 [2024-07-25 09:48:51.108798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.702 [2024-07-25 09:48:51.157656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.702 [2024-07-25 09:48:51.157728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:34:50.702 [2024-07-25 09:48:51.157768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.848 ms 00:34:50.702 [2024-07-25 09:48:51.157778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.702 [2024-07-25 09:48:51.204645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.702 [2024-07-25 09:48:51.204719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:34:50.702 [2024-07-25 09:48:51.204734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.808 ms 00:34:50.702 [2024-07-25 09:48:51.204742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.702 [2024-07-25 09:48:51.204825] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:50.702 [2024-07-25 09:48:51.204843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:34:50.702 [2024-07-25 09:48:51.204855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:34:50.702 [2024-07-25 09:48:51.204864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 
[2024-07-25 09:48:51.204975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.204999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 
state: free 00:34:50.702 [2024-07-25 09:48:51.205192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:50.702 [2024-07-25 09:48:51.205296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 
0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:50.703 [2024-07-25 09:48:51.205734] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:50.703 [2024-07-25 09:48:51.205743] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: dc496ea9-22e3-4957-af43-c6124a444212 00:34:50.703 [2024-07-25 09:48:51.205755] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:34:50.703 [2024-07-25 09:48:51.205764] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:34:50.703 [2024-07-25 09:48:51.205771] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:50.703 [2024-07-25 09:48:51.205780] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:50.703 [2024-07-25 09:48:51.205788] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:50.703 [2024-07-25 09:48:51.205797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:50.703 [2024-07-25 09:48:51.205805] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:50.703 [2024-07-25 09:48:51.205812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:50.703 [2024-07-25 09:48:51.205820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:50.703 [2024-07-25 09:48:51.205828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.703 [2024-07-25 09:48:51.205837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:50.703 [2024-07-25 09:48:51.205849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:34:50.703 [2024-07-25 09:48:51.205857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.703 [2024-07-25 09:48:51.229463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.703 [2024-07-25 09:48:51.229528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:50.703 [2024-07-25 09:48:51.229562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.590 ms 00:34:50.703 
[2024-07-25 09:48:51.229572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.703 [2024-07-25 09:48:51.230164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:50.703 [2024-07-25 09:48:51.230182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:50.703 [2024-07-25 09:48:51.230192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:34:50.703 [2024-07-25 09:48:51.230206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.703 [2024-07-25 09:48:51.280012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.703 [2024-07-25 09:48:51.280070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:50.703 [2024-07-25 09:48:51.280083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.703 [2024-07-25 09:48:51.280092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.703 [2024-07-25 09:48:51.280184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.703 [2024-07-25 09:48:51.280194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:50.703 [2024-07-25 09:48:51.280203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.703 [2024-07-25 09:48:51.280218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.703 [2024-07-25 09:48:51.280302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.703 [2024-07-25 09:48:51.280315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:50.703 [2024-07-25 09:48:51.280324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.703 [2024-07-25 09:48:51.280333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.703 [2024-07-25 09:48:51.280352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.703 [2024-07-25 09:48:51.280361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:50.703 [2024-07-25 09:48:51.280369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.703 [2024-07-25 09:48:51.280377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.963 [2024-07-25 09:48:51.408649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.963 [2024-07-25 09:48:51.408735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:50.963 [2024-07-25 09:48:51.408750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.963 [2024-07-25 09:48:51.408759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.963 [2024-07-25 09:48:51.528198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.963 [2024-07-25 09:48:51.528292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:50.963 [2024-07-25 09:48:51.528308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.963 [2024-07-25 09:48:51.528327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.963 [2024-07-25 09:48:51.528421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.963 [2024-07-25 09:48:51.528432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:50.963 [2024-07-25 09:48:51.528440] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.963 [2024-07-25 09:48:51.528449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.963 [2024-07-25 09:48:51.528490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.963 [2024-07-25 09:48:51.528500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:50.963 [2024-07-25 09:48:51.528509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.963 [2024-07-25 09:48:51.528518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.963 [2024-07-25 09:48:51.528662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.963 [2024-07-25 09:48:51.528683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:50.963 [2024-07-25 09:48:51.528693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.963 [2024-07-25 09:48:51.528702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.963 [2024-07-25 09:48:51.528741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.963 [2024-07-25 09:48:51.528753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:50.963 [2024-07-25 09:48:51.528762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.963 [2024-07-25 09:48:51.528772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.963 [2024-07-25 09:48:51.528817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.963 [2024-07-25 09:48:51.528828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:50.963 [2024-07-25 09:48:51.528836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.963 [2024-07-25 09:48:51.528844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.963 [2024-07-25 09:48:51.528890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:50.963 [2024-07-25 09:48:51.528900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:50.963 [2024-07-25 09:48:51.528909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:50.963 [2024-07-25 09:48:51.528918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:50.963 [2024-07-25 09:48:51.529045] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 624.326 ms, result 0 00:34:52.341 00:34:52.341 00:34:52.341 09:48:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:34:54.242 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:34:54.242 09:48:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:34:54.242 09:48:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:34:54.242 09:48:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:34:54.242 09:48:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:34:54.501 09:48:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:34:54.501 09:48:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:54.501 09:48:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:34:54.501 Process with pid 82844 is not found 00:34:54.501 09:48:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82844 00:34:54.501 09:48:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 82844 ']' 00:34:54.501 09:48:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 82844 00:34:54.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (82844) - No such process 00:34:54.501 09:48:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 82844 is not found' 00:34:54.501 09:48:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:34:54.760 Remove shared memory files 00:34:54.760 09:48:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:34:54.760 09:48:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:34:54.760 09:48:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:34:54.760 09:48:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:34:54.760 09:48:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:34:54.760 09:48:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:34:54.760 09:48:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:34:54.760 00:34:54.760 real 3m7.073s 00:34:54.760 user 3m35.666s 00:34:54.760 sys 0m27.913s 00:34:54.760 09:48:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:34:54.760 09:48:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:54.760 ************************************ 00:34:54.760 END TEST ftl_dirty_shutdown 00:34:54.760 ************************************ 00:34:54.760 09:48:55 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:34:54.760 09:48:55 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:34:54.760 09:48:55 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:34:54.760 09:48:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:34:54.760 ************************************ 00:34:54.760 START TEST ftl_upgrade_shutdown 00:34:54.760 ************************************ 00:34:54.760 09:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:34:55.019 * Looking for test storage... 00:34:55.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:34:55.019 
09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:34:55.019 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:55.020 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84847 00:34:55.020 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:34:55.020 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:34:55.020 09:48:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84847 00:34:55.020 09:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84847 ']' 00:34:55.020 09:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.020 09:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:34:55.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.020 09:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:55.020 09:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:34:55.020 09:48:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:34:55.020 [2024-07-25 09:48:55.599525] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:34:55.020 [2024-07-25 09:48:55.599659] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84847 ] 00:34:55.278 [2024-07-25 09:48:55.764635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.536 [2024-07-25 09:48:56.020806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:34:56.473 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:34:57.040 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:34:57.040 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:34:57.040 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:34:57.040 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:34:57.040 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:34:57.040 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:34:57.040 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:34:57.040 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:34:57.040 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:34:57.040 { 00:34:57.040 "name": "basen1", 00:34:57.040 "aliases": [ 00:34:57.040 "f37a8020-d19b-4b42-8c47-4991cb83f213" 00:34:57.040 ], 00:34:57.040 "product_name": "NVMe disk", 00:34:57.040 "block_size": 4096, 00:34:57.040 "num_blocks": 1310720, 00:34:57.040 "uuid": "f37a8020-d19b-4b42-8c47-4991cb83f213", 00:34:57.040 "assigned_rate_limits": { 00:34:57.040 "rw_ios_per_sec": 0, 00:34:57.040 "rw_mbytes_per_sec": 0, 00:34:57.040 "r_mbytes_per_sec": 0, 00:34:57.040 "w_mbytes_per_sec": 0 00:34:57.040 }, 00:34:57.040 "claimed": true, 00:34:57.040 "claim_type": "read_many_write_one", 00:34:57.040 "zoned": false, 00:34:57.040 "supported_io_types": { 00:34:57.040 "read": true, 00:34:57.040 "write": true, 00:34:57.040 "unmap": true, 00:34:57.040 "flush": true, 00:34:57.040 "reset": true, 00:34:57.040 "nvme_admin": true, 00:34:57.040 "nvme_io": true, 00:34:57.040 "nvme_io_md": false, 00:34:57.040 "write_zeroes": true, 00:34:57.040 "zcopy": false, 00:34:57.040 "get_zone_info": false, 00:34:57.040 "zone_management": false, 00:34:57.040 "zone_append": false, 00:34:57.040 "compare": true, 00:34:57.040 "compare_and_write": false, 00:34:57.040 "abort": true, 00:34:57.040 "seek_hole": false, 00:34:57.040 "seek_data": false, 00:34:57.040 "copy": true, 00:34:57.040 "nvme_iov_md": false 00:34:57.040 }, 00:34:57.040 "driver_specific": { 00:34:57.040 "nvme": [ 00:34:57.040 { 00:34:57.040 "pci_address": "0000:00:11.0", 00:34:57.040 "trid": { 00:34:57.040 "trtype": "PCIe", 00:34:57.040 "traddr": "0000:00:11.0" 00:34:57.040 }, 00:34:57.040 "ctrlr_data": { 00:34:57.040 "cntlid": 0, 00:34:57.040 "vendor_id": "0x1b36", 00:34:57.040 "model_number": "QEMU NVMe Ctrl", 00:34:57.040 "serial_number": "12341", 00:34:57.040 "firmware_revision": "8.0.0", 00:34:57.040 "subnqn": "nqn.2019-08.org.qemu:12341", 00:34:57.040 "oacs": { 00:34:57.040 "security": 0, 00:34:57.040 "format": 1, 00:34:57.040 "firmware": 0, 00:34:57.040 "ns_manage": 1 00:34:57.040 }, 00:34:57.040 "multi_ctrlr": false, 00:34:57.040 "ana_reporting": false 00:34:57.040 }, 00:34:57.040 "vs": { 00:34:57.040 "nvme_version": "1.4" 00:34:57.040 }, 00:34:57.040 "ns_data": { 00:34:57.041 "id": 1, 00:34:57.041 "can_share": false 00:34:57.041 } 00:34:57.041 } 00:34:57.041 ], 00:34:57.041 "mp_policy": "active_passive" 00:34:57.041 } 00:34:57.041 } 00:34:57.041 ]' 00:34:57.041 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:34:57.041 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:34:57.041 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 
00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=711496aa-7c08-45f5-9e0b-f64ab7c683db 00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:34:57.300 09:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 711496aa-7c08-45f5-9e0b-f64ab7c683db 00:34:57.560 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:34:57.819 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=76786d88-593c-4a5e-99c9-c9db9c43e5ac 00:34:57.819 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 76786d88-593c-4a5e-99c9-c9db9c43e5ac 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=bdecbbef-7361-441f-9199-5389fd91fe6b 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z bdecbbef-7361-441f-9199-5389fd91fe6b ]] 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 bdecbbef-7361-441f-9199-5389fd91fe6b 5120 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=bdecbbef-7361-441f-9199-5389fd91fe6b 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size bdecbbef-7361-441f-9199-5389fd91fe6b 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=bdecbbef-7361-441f-9199-5389fd91fe6b 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:34:58.078 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bdecbbef-7361-441f-9199-5389fd91fe6b 00:34:58.338 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:34:58.338 { 00:34:58.338 "name": "bdecbbef-7361-441f-9199-5389fd91fe6b", 00:34:58.338 "aliases": [ 00:34:58.338 "lvs/basen1p0" 00:34:58.338 ], 00:34:58.338 "product_name": "Logical Volume", 00:34:58.338 "block_size": 4096, 00:34:58.338 "num_blocks": 5242880, 00:34:58.338 "uuid": "bdecbbef-7361-441f-9199-5389fd91fe6b", 00:34:58.338 "assigned_rate_limits": { 00:34:58.338 "rw_ios_per_sec": 0, 00:34:58.338 "rw_mbytes_per_sec": 0, 00:34:58.338 "r_mbytes_per_sec": 0, 00:34:58.338 "w_mbytes_per_sec": 0 00:34:58.338 }, 00:34:58.338 "claimed": false, 00:34:58.338 "zoned": false, 00:34:58.338 "supported_io_types": { 00:34:58.338 "read": true, 00:34:58.338 "write": true, 00:34:58.338 "unmap": true, 00:34:58.338 "flush": false, 00:34:58.338 "reset": true, 00:34:58.338 "nvme_admin": false, 00:34:58.338 "nvme_io": false, 00:34:58.338 "nvme_io_md": false, 00:34:58.338 "write_zeroes": true, 00:34:58.338 "zcopy": false, 
00:34:58.338 "get_zone_info": false, 00:34:58.338 "zone_management": false, 00:34:58.338 "zone_append": false, 00:34:58.338 "compare": false, 00:34:58.338 "compare_and_write": false, 00:34:58.338 "abort": false, 00:34:58.338 "seek_hole": true, 00:34:58.338 "seek_data": true, 00:34:58.338 "copy": false, 00:34:58.338 "nvme_iov_md": false 00:34:58.338 }, 00:34:58.338 "driver_specific": { 00:34:58.338 "lvol": { 00:34:58.338 "lvol_store_uuid": "76786d88-593c-4a5e-99c9-c9db9c43e5ac", 00:34:58.338 "base_bdev": "basen1", 00:34:58.338 "thin_provision": true, 00:34:58.338 "num_allocated_clusters": 0, 00:34:58.338 "snapshot": false, 00:34:58.338 "clone": false, 00:34:58.338 "esnap_clone": false 00:34:58.338 } 00:34:58.338 } 00:34:58.338 } 00:34:58.338 ]' 00:34:58.338 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:34:58.338 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:34:58.338 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:34:58.338 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:34:58.338 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:34:58.338 09:48:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:34:58.338 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:34:58.338 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:34:58.338 09:48:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:34:58.597 09:48:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:34:58.597 09:48:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:34:58.597 09:48:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:34:58.857 09:48:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:34:58.857 09:48:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:34:58.857 09:48:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d bdecbbef-7361-441f-9199-5389fd91fe6b -c cachen1p0 --l2p_dram_limit 2 00:34:59.117 [2024-07-25 09:48:59.533384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.117 [2024-07-25 09:48:59.533442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:34:59.117 [2024-07-25 09:48:59.533459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:34:59.117 [2024-07-25 09:48:59.533471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.117 [2024-07-25 09:48:59.533542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.117 [2024-07-25 09:48:59.533556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:34:59.117 [2024-07-25 09:48:59.533566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:34:59.117 [2024-07-25 09:48:59.533577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.117 [2024-07-25 09:48:59.533600] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:34:59.117 [2024-07-25 09:48:59.535004] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:34:59.117 [2024-07-25 09:48:59.535028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.117 [2024-07-25 09:48:59.535041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:34:59.117 [2024-07-25 09:48:59.535051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.436 ms 00:34:59.117 [2024-07-25 09:48:59.535062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.117 [2024-07-25 09:48:59.535134] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 2df06f8c-c9ac-42e1-b7ad-26fffdfd390a 00:34:59.117 [2024-07-25 09:48:59.536668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.117 [2024-07-25 09:48:59.536715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:34:59.117 [2024-07-25 09:48:59.536730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:34:59.118 [2024-07-25 09:48:59.536739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.118 [2024-07-25 09:48:59.544493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.118 [2024-07-25 09:48:59.544532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:34:59.118 [2024-07-25 09:48:59.544547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.709 ms 00:34:59.118 [2024-07-25 09:48:59.544556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.118 [2024-07-25 09:48:59.544624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.118 [2024-07-25 09:48:59.544655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:34:59.118 [2024-07-25 09:48:59.544668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:34:59.118 [2024-07-25 09:48:59.544678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.118 [2024-07-25 09:48:59.544780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.118 [2024-07-25 09:48:59.544792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:34:59.118 [2024-07-25 09:48:59.544807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:34:59.118 [2024-07-25 09:48:59.544816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.118 [2024-07-25 09:48:59.544845] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:34:59.118 [2024-07-25 09:48:59.551505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.118 [2024-07-25 09:48:59.551551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:34:59.118 [2024-07-25 09:48:59.551562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.683 ms 00:34:59.118 [2024-07-25 09:48:59.551574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.118 [2024-07-25 09:48:59.551613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.118 [2024-07-25 09:48:59.551625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:34:59.118 [2024-07-25 09:48:59.551635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:34:59.118 [2024-07-25 09:48:59.551645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.118 [2024-07-25 
09:48:59.551692] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:34:59.118 [2024-07-25 09:48:59.551841] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:34:59.118 [2024-07-25 09:48:59.551860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:34:59.118 [2024-07-25 09:48:59.551876] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:34:59.118 [2024-07-25 09:48:59.551888] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:34:59.118 [2024-07-25 09:48:59.551901] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:34:59.118 [2024-07-25 09:48:59.551910] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:34:59.118 [2024-07-25 09:48:59.551922] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:34:59.118 [2024-07-25 09:48:59.551931] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:34:59.118 [2024-07-25 09:48:59.551941] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:34:59.118 [2024-07-25 09:48:59.551950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.118 [2024-07-25 09:48:59.551960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:34:59.118 [2024-07-25 09:48:59.551968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:34:59.118 [2024-07-25 09:48:59.551978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.118 [2024-07-25 09:48:59.552061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.118 [2024-07-25 09:48:59.552086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:34:59.118 [2024-07-25 09:48:59.552099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:34:59.118 [2024-07-25 09:48:59.552117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.118 [2024-07-25 09:48:59.552271] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:34:59.118 [2024-07-25 09:48:59.552298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:34:59.118 [2024-07-25 09:48:59.552313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:59.118 [2024-07-25 09:48:59.552329] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552342] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:34:59.118 [2024-07-25 09:48:59.552358] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:34:59.118 [2024-07-25 09:48:59.552407] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:34:59.118 [2024-07-25 09:48:59.552419] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:34:59.118 [2024-07-25 09:48:59.552435] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552446] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:34:59.118 [2024-07-25 09:48:59.552461] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:34:59.118 
[2024-07-25 09:48:59.552471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552487] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:34:59.118 [2024-07-25 09:48:59.552498] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:34:59.118 [2024-07-25 09:48:59.552512] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:34:59.118 [2024-07-25 09:48:59.552539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:34:59.118 [2024-07-25 09:48:59.552550] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552563] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:34:59.118 [2024-07-25 09:48:59.552574] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:34:59.118 [2024-07-25 09:48:59.552587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:59.118 [2024-07-25 09:48:59.552598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:34:59.118 [2024-07-25 09:48:59.552612] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:34:59.118 [2024-07-25 09:48:59.552623] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:59.118 [2024-07-25 09:48:59.552650] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:34:59.118 [2024-07-25 09:48:59.552663] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:34:59.118 [2024-07-25 09:48:59.552695] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:59.118 [2024-07-25 09:48:59.552709] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:34:59.118 [2024-07-25 09:48:59.552724] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:34:59.118 [2024-07-25 09:48:59.552738] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:34:59.118 [2024-07-25 09:48:59.552754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:34:59.118 [2024-07-25 09:48:59.552766] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:34:59.118 [2024-07-25 09:48:59.552783] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:34:59.118 [2024-07-25 09:48:59.552810] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:34:59.118 [2024-07-25 09:48:59.552822] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:34:59.118 [2024-07-25 09:48:59.552851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552882] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:34:59.118 [2024-07-25 09:48:59.552897] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:34:59.118 [2024-07-25 09:48:59.552910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552926] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 
00:34:59.118 [2024-07-25 09:48:59.552941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:34:59.118 [2024-07-25 09:48:59.552957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:34:59.118 [2024-07-25 09:48:59.552970] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:34:59.118 [2024-07-25 09:48:59.552986] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:34:59.118 [2024-07-25 09:48:59.553000] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:34:59.118 [2024-07-25 09:48:59.553019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:34:59.118 [2024-07-25 09:48:59.553033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:34:59.118 [2024-07-25 09:48:59.553049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:34:59.118 [2024-07-25 09:48:59.553062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:34:59.118 [2024-07-25 09:48:59.553085] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:34:59.118 [2024-07-25 09:48:59.553108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:59.118 [2024-07-25 09:48:59.553127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:34:59.118 [2024-07-25 09:48:59.553141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:34:59.118 [2024-07-25 09:48:59.553158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:34:59.118 [2024-07-25 09:48:59.553172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:34:59.118 [2024-07-25 09:48:59.553188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:34:59.118 [2024-07-25 09:48:59.553202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:34:59.118 [2024-07-25 09:48:59.553220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:34:59.119 [2024-07-25 09:48:59.553234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:34:59.119 [2024-07-25 09:48:59.553270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:34:59.119 [2024-07-25 09:48:59.553285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:34:59.119 [2024-07-25 09:48:59.553305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:34:59.119 [2024-07-25 09:48:59.553318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:34:59.119 [2024-07-25 09:48:59.553334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 
blk_sz:0x20 00:34:59.119 [2024-07-25 09:48:59.553348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:34:59.119 [2024-07-25 09:48:59.553364] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:34:59.119 [2024-07-25 09:48:59.553379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:59.119 [2024-07-25 09:48:59.553401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:59.119 [2024-07-25 09:48:59.553414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:34:59.119 [2024-07-25 09:48:59.553431] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:34:59.119 [2024-07-25 09:48:59.553444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:34:59.119 [2024-07-25 09:48:59.553463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:34:59.119 [2024-07-25 09:48:59.553477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:34:59.119 [2024-07-25 09:48:59.553495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.294 ms 00:34:59.119 [2024-07-25 09:48:59.553508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:34:59.119 [2024-07-25 09:48:59.553591] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:34:59.119 [2024-07-25 09:48:59.553608] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:35:02.406 [2024-07-25 09:49:02.750377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.750444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:35:02.406 [2024-07-25 09:49:02.750463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3202.941 ms 00:35:02.406 [2024-07-25 09:49:02.750471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.798258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.798317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:02.406 [2024-07-25 09:49:02.798351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.576 ms 00:35:02.406 [2024-07-25 09:49:02.798360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.798484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.798496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:02.406 [2024-07-25 09:49:02.798512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:35:02.406 [2024-07-25 09:49:02.798521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.853925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.853976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:02.406 [2024-07-25 09:49:02.853992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.450 ms 00:35:02.406 [2024-07-25 09:49:02.854001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.854058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.854067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:02.406 [2024-07-25 09:49:02.854081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:35:02.406 [2024-07-25 09:49:02.854088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.854609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.854622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:02.406 [2024-07-25 09:49:02.854633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.453 ms 00:35:02.406 [2024-07-25 09:49:02.854641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.854707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.854722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:02.406 [2024-07-25 09:49:02.854732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:35:02.406 [2024-07-25 09:49:02.854739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.878516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.878569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:02.406 [2024-07-25 09:49:02.878584] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.794 ms 00:35:02.406 [2024-07-25 09:49:02.878592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.894564] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:02.406 [2024-07-25 09:49:02.895668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.895696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:02.406 [2024-07-25 09:49:02.895711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.981 ms 00:35:02.406 [2024-07-25 09:49:02.895722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.939516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.939608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:35:02.406 [2024-07-25 09:49:02.939625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.821 ms 00:35:02.406 [2024-07-25 09:49:02.939636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.939760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.939774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:35:02.406 [2024-07-25 09:49:02.939783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:35:02.406 [2024-07-25 09:49:02.939797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.406 [2024-07-25 09:49:02.985055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.406 [2024-07-25 09:49:02.985129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:35:02.406 [2024-07-25 09:49:02.985146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.273 ms 00:35:02.406 [2024-07-25 09:49:02.985160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.699 [2024-07-25 09:49:03.030706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.699 [2024-07-25 09:49:03.030794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:35:02.699 [2024-07-25 09:49:03.030809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.538 ms 00:35:02.699 [2024-07-25 09:49:03.030819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.699 [2024-07-25 09:49:03.031730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.699 [2024-07-25 09:49:03.031754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:02.700 [2024-07-25 09:49:03.031767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.831 ms 00:35:02.700 [2024-07-25 09:49:03.031777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.700 [2024-07-25 09:49:03.155415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.700 [2024-07-25 09:49:03.155489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:35:02.700 [2024-07-25 09:49:03.155506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 123.788 ms 00:35:02.700 [2024-07-25 09:49:03.155519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.700 [2024-07-25 09:49:03.205731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:35:02.700 [2024-07-25 09:49:03.205805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:35:02.700 [2024-07-25 09:49:03.205822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.236 ms 00:35:02.700 [2024-07-25 09:49:03.205833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.700 [2024-07-25 09:49:03.256137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.700 [2024-07-25 09:49:03.256206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:35:02.700 [2024-07-25 09:49:03.256255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 50.313 ms 00:35:02.700 [2024-07-25 09:49:03.256271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.700 [2024-07-25 09:49:03.301121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.700 [2024-07-25 09:49:03.301192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:35:02.700 [2024-07-25 09:49:03.301207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.838 ms 00:35:02.700 [2024-07-25 09:49:03.301217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.700 [2024-07-25 09:49:03.301318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.700 [2024-07-25 09:49:03.301332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:02.700 [2024-07-25 09:49:03.301342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:35:02.700 [2024-07-25 09:49:03.301355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.700 [2024-07-25 09:49:03.301481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:02.700 [2024-07-25 09:49:03.301498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:02.700 [2024-07-25 09:49:03.301507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:35:02.700 [2024-07-25 09:49:03.301517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:02.700 [2024-07-25 09:49:03.302776] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3776.098 ms, result 0 00:35:02.700 { 00:35:02.700 "name": "ftl", 00:35:02.700 "uuid": "2df06f8c-c9ac-42e1-b7ad-26fffdfd390a" 00:35:02.700 } 00:35:02.958 09:49:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:35:02.958 [2024-07-25 09:49:03.517426] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.958 09:49:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:35:03.216 09:49:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:35:03.475 [2024-07-25 09:49:03.889104] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:03.475 09:49:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:35:03.734 [2024-07-25 09:49:04.091432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:03.734 09:49:04 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:03.993 Fill FTL, iteration 1 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84975 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84975 /var/tmp/spdk.tgt.sock 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 84975 ']' 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:03.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:03.993 09:49:04 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:03.993 [2024-07-25 09:49:04.544243] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:03.993 [2024-07-25 09:49:04.544352] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84975 ] 00:35:04.252 [2024-07-25 09:49:04.708455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:04.510 [2024-07-25 09:49:04.952520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.442 09:49:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:05.442 09:49:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:35:05.442 09:49:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:35:05.700 ftln1 00:35:05.700 09:49:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:35:05.700 09:49:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:35:05.959 09:49:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:35:05.959 09:49:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84975 00:35:05.959 09:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84975 ']' 00:35:05.960 09:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84975 00:35:05.960 09:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:35:05.960 09:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:05.960 09:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84975 00:35:05.960 09:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:35:05.960 09:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:35:05.960 killing process with pid 84975 00:35:05.960 09:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84975' 00:35:05.960 09:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84975 00:35:05.960 09:49:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84975 00:35:09.247 09:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:35:09.247 09:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:35:09.247 [2024-07-25 09:49:09.363553] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:09.247 [2024-07-25 09:49:09.363683] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85034 ] 00:35:09.247 [2024-07-25 09:49:09.528002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.247 [2024-07-25 09:49:09.792773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.109  Copying: 234/1024 [MB] (234 MBps) Copying: 469/1024 [MB] (235 MBps) Copying: 703/1024 [MB] (234 MBps) Copying: 936/1024 [MB] (233 MBps) Copying: 1024/1024 [MB] (average 234 MBps) 00:35:16.109 00:35:16.109 09:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:35:16.109 09:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:35:16.109 Calculate MD5 checksum, iteration 1 00:35:16.109 09:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:16.109 09:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:16.109 09:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:16.109 09:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:16.109 09:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:16.109 09:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:16.109 [2024-07-25 09:49:16.426441] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:16.109 [2024-07-25 09:49:16.426588] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85108 ] 00:35:16.109 [2024-07-25 09:49:16.595084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.368 [2024-07-25 09:49:16.858310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:20.210  Copying: 544/1024 [MB] (544 MBps) Copying: 1024/1024 [MB] (average 545 MBps) 00:35:20.210 00:35:20.210 09:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:35:20.210 09:49:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:22.167 Fill FTL, iteration 2 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2d14400c1bbf6e8131efcca0ce213843 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:22.167 09:49:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:35:22.167 [2024-07-25 09:49:22.679492] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:22.167 [2024-07-25 09:49:22.679617] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85171 ] 00:35:22.426 [2024-07-25 09:49:22.846859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.685 [2024-07-25 09:49:23.114046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.526  Copying: 229/1024 [MB] (229 MBps) Copying: 465/1024 [MB] (236 MBps) Copying: 686/1024 [MB] (221 MBps) Copying: 906/1024 [MB] (220 MBps) Copying: 1024/1024 [MB] (average 226 MBps) 00:35:29.526 00:35:29.526 Calculate MD5 checksum, iteration 2 00:35:29.526 09:49:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:35:29.526 09:49:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:35:29.526 09:49:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:29.526 09:49:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:29.526 09:49:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:29.526 09:49:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:29.526 09:49:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:29.526 09:49:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:35:29.526 [2024-07-25 09:49:29.812941] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:29.526 [2024-07-25 09:49:29.813079] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85247 ] 00:35:29.526 [2024-07-25 09:49:29.980412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.784 [2024-07-25 09:49:30.238275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:34.205  Copying: 607/1024 [MB] (607 MBps) Copying: 1024/1024 [MB] (average 576 MBps) 00:35:34.205 00:35:34.205 09:49:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:35:34.205 09:49:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:35:36.106 09:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:35:36.106 09:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f1280d6d679416f47f92616450bee902 00:35:36.106 09:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:35:36.106 09:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:35:36.106 09:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:36.366 [2024-07-25 09:49:36.816091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.366 [2024-07-25 09:49:36.816149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:36.366 [2024-07-25 09:49:36.816167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:35:36.366 [2024-07-25 09:49:36.816180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.366 [2024-07-25 09:49:36.816214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.366 [2024-07-25 09:49:36.816224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:36.366 [2024-07-25 09:49:36.816249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:36.366 [2024-07-25 09:49:36.816258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.366 [2024-07-25 09:49:36.816293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.366 [2024-07-25 09:49:36.816302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:36.366 [2024-07-25 09:49:36.816311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:36.366 [2024-07-25 09:49:36.816333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.366 [2024-07-25 09:49:36.816403] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.306 ms, result 0 00:35:36.366 true 00:35:36.366 09:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:36.626 { 00:35:36.626 "name": "ftl", 00:35:36.626 "properties": [ 00:35:36.626 { 00:35:36.626 "name": "superblock_version", 00:35:36.626 "value": 5, 00:35:36.626 "read-only": true 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "name": "base_device", 00:35:36.626 "bands": [ 00:35:36.626 { 00:35:36.626 "id": 0, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 1, 
00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 2, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 3, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 4, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 5, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 6, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 7, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 8, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 9, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 10, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 11, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 12, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 13, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 14, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 15, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 16, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 17, 00:35:36.626 "state": "FREE", 00:35:36.626 "validity": 0.0 00:35:36.626 } 00:35:36.626 ], 00:35:36.626 "read-only": true 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "name": "cache_device", 00:35:36.626 "type": "bdev", 00:35:36.626 "chunks": [ 00:35:36.626 { 00:35:36.626 "id": 0, 00:35:36.626 "state": "INACTIVE", 00:35:36.626 "utilization": 0.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 1, 00:35:36.626 "state": "CLOSED", 00:35:36.626 "utilization": 1.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 2, 00:35:36.626 "state": "CLOSED", 00:35:36.626 "utilization": 1.0 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 3, 00:35:36.626 "state": "OPEN", 00:35:36.626 "utilization": 0.001953125 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "id": 4, 00:35:36.626 "state": "OPEN", 00:35:36.626 "utilization": 0.0 00:35:36.626 } 00:35:36.626 ], 00:35:36.626 "read-only": true 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "name": "verbose_mode", 00:35:36.626 "value": true, 00:35:36.626 "unit": "", 00:35:36.626 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:36.626 }, 00:35:36.626 { 00:35:36.626 "name": "prep_upgrade_on_shutdown", 00:35:36.626 "value": false, 00:35:36.626 "unit": "", 00:35:36.626 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:36.626 } 00:35:36.626 ] 00:35:36.626 } 00:35:36.626 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:35:36.885 [2024-07-25 09:49:37.274322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.885 [2024-07-25 09:49:37.274379] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:36.885 [2024-07-25 09:49:37.274394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:35:36.885 [2024-07-25 09:49:37.274403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.885 [2024-07-25 09:49:37.274433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.885 [2024-07-25 09:49:37.274444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:36.885 [2024-07-25 09:49:37.274453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:36.885 [2024-07-25 09:49:37.274462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.885 [2024-07-25 09:49:37.274482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:36.885 [2024-07-25 09:49:37.274491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:36.885 [2024-07-25 09:49:37.274499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:36.885 [2024-07-25 09:49:37.274507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:36.885 [2024-07-25 09:49:37.274571] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.244 ms, result 0 00:35:36.885 true 00:35:36.885 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:35:36.886 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:35:36.886 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:37.145 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:35:37.145 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:35:37.145 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:37.145 [2024-07-25 09:49:37.713214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.145 [2024-07-25 09:49:37.713293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:37.145 [2024-07-25 09:49:37.713309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:35:37.145 [2024-07-25 09:49:37.713318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.145 [2024-07-25 09:49:37.713347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.145 [2024-07-25 09:49:37.713356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:37.145 [2024-07-25 09:49:37.713366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:37.145 [2024-07-25 09:49:37.713373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.145 [2024-07-25 09:49:37.713394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:37.145 [2024-07-25 09:49:37.713402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:37.145 [2024-07-25 09:49:37.713411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:35:37.145 [2024-07-25 09:49:37.713418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:37.145 [2024-07-25 09:49:37.713483] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.265 ms, result 0 00:35:37.145 true 00:35:37.145 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:37.405 { 00:35:37.405 "name": "ftl", 00:35:37.405 "properties": [ 00:35:37.405 { 00:35:37.405 "name": "superblock_version", 00:35:37.405 "value": 5, 00:35:37.405 "read-only": true 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "name": "base_device", 00:35:37.405 "bands": [ 00:35:37.405 { 00:35:37.405 "id": 0, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 1, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 2, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 3, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 4, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 5, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 6, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 7, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 8, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 9, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 10, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 11, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 12, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 13, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 14, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 15, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 16, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 17, 00:35:37.405 "state": "FREE", 00:35:37.405 "validity": 0.0 00:35:37.405 } 00:35:37.405 ], 00:35:37.405 "read-only": true 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "name": "cache_device", 00:35:37.405 "type": "bdev", 00:35:37.405 "chunks": [ 00:35:37.405 { 00:35:37.405 "id": 0, 00:35:37.405 "state": "INACTIVE", 00:35:37.405 "utilization": 0.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 1, 00:35:37.405 "state": "CLOSED", 00:35:37.405 "utilization": 1.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 2, 00:35:37.405 "state": "CLOSED", 00:35:37.405 "utilization": 1.0 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 3, 00:35:37.405 "state": "OPEN", 00:35:37.405 "utilization": 0.001953125 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "id": 4, 00:35:37.405 "state": "OPEN", 00:35:37.405 "utilization": 0.0 00:35:37.405 } 00:35:37.405 ], 00:35:37.405 "read-only": true 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "name": "verbose_mode", 00:35:37.405 "value": true, 00:35:37.405 "unit": "", 00:35:37.405 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:37.405 }, 00:35:37.405 { 00:35:37.405 "name": "prep_upgrade_on_shutdown", 00:35:37.405 "value": true, 00:35:37.405 "unit": "", 00:35:37.405 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:37.405 } 00:35:37.405 ] 00:35:37.405 } 00:35:37.405 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84847 ]] 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84847 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 84847 ']' 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 84847 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84847 00:35:37.406 killing process with pid 84847 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84847' 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 84847 00:35:37.406 09:49:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 84847 00:35:38.785 [2024-07-25 09:49:39.205710] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:35:38.785 [2024-07-25 09:49:39.224644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:38.785 [2024-07-25 09:49:39.224716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:35:38.785 [2024-07-25 09:49:39.224745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:35:38.785 [2024-07-25 09:49:39.224754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:38.785 [2024-07-25 09:49:39.224777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:35:38.785 [2024-07-25 09:49:39.228970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:38.785 [2024-07-25 09:49:39.228999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:35:38.785 [2024-07-25 09:49:39.229015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.185 ms 00:35:38.785 [2024-07-25 09:49:39.229023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:46.980365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.951 [2024-07-25 09:49:46.980441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:35:46.951 [2024-07-25 09:49:46.980457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7766.258 ms 00:35:46.951 [2024-07-25 09:49:46.980465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:46.981733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.951 [2024-07-25 09:49:46.981762] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:35:46.951 [2024-07-25 09:49:46.981773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.251 ms 00:35:46.951 [2024-07-25 09:49:46.981782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:46.982885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.951 [2024-07-25 09:49:46.982912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:35:46.951 [2024-07-25 09:49:46.982927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.063 ms 00:35:46.951 [2024-07-25 09:49:46.982934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:46.999680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.951 [2024-07-25 09:49:46.999726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:35:46.951 [2024-07-25 09:49:46.999737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.740 ms 00:35:46.951 [2024-07-25 09:49:46.999744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:47.009651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.951 [2024-07-25 09:49:47.009701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:35:46.951 [2024-07-25 09:49:47.009713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.871 ms 00:35:46.951 [2024-07-25 09:49:47.009721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:47.009828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.951 [2024-07-25 09:49:47.009840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:35:46.951 [2024-07-25 09:49:47.009850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:35:46.951 [2024-07-25 09:49:47.009857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:47.027716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.951 [2024-07-25 09:49:47.027768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:35:46.951 [2024-07-25 09:49:47.027783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.875 ms 00:35:46.951 [2024-07-25 09:49:47.027791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:47.046114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.951 [2024-07-25 09:49:47.046161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:35:46.951 [2024-07-25 09:49:47.046174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.317 ms 00:35:46.951 [2024-07-25 09:49:47.046181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:47.063558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.951 [2024-07-25 09:49:47.063605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:35:46.951 [2024-07-25 09:49:47.063619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.374 ms 00:35:46.951 [2024-07-25 09:49:47.063628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:47.081546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.951 
[2024-07-25 09:49:47.081596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:35:46.951 [2024-07-25 09:49:47.081609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.862 ms 00:35:46.951 [2024-07-25 09:49:47.081617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.951 [2024-07-25 09:49:47.081654] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:35:46.951 [2024-07-25 09:49:47.081670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:35:46.951 [2024-07-25 09:49:47.081692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:35:46.951 [2024-07-25 09:49:47.081701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:35:46.951 [2024-07-25 09:49:47.081711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:46.951 [2024-07-25 09:49:47.081720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:46.951 [2024-07-25 09:49:47.081728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:46.951 [2024-07-25 09:49:47.081737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:46.951 [2024-07-25 09:49:47.081745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:46.951 [2024-07-25 09:49:47.081753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:46.951 [2024-07-25 09:49:47.081762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:46.951 [2024-07-25 09:49:47.081770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:46.951 [2024-07-25 09:49:47.081779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:46.951 [2024-07-25 09:49:47.081787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:46.951 [2024-07-25 09:49:47.081820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:46.952 [2024-07-25 09:49:47.081829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:46.952 [2024-07-25 09:49:47.081837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:46.952 [2024-07-25 09:49:47.081845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:46.952 [2024-07-25 09:49:47.081854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:46.952 [2024-07-25 09:49:47.081866] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:35:46.952 [2024-07-25 09:49:47.081875] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2df06f8c-c9ac-42e1-b7ad-26fffdfd390a 00:35:46.952 [2024-07-25 09:49:47.081883] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:35:46.952 [2024-07-25 09:49:47.081892] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:35:46.952 [2024-07-25 09:49:47.081904] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:35:46.952 [2024-07-25 09:49:47.081914] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:35:46.952 [2024-07-25 09:49:47.081923] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:35:46.952 [2024-07-25 09:49:47.081931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:35:46.952 [2024-07-25 09:49:47.081939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:35:46.952 [2024-07-25 09:49:47.081946] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:35:46.952 [2024-07-25 09:49:47.081954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:35:46.952 [2024-07-25 09:49:47.081963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.952 [2024-07-25 09:49:47.081971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:35:46.952 [2024-07-25 09:49:47.081980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.310 ms 00:35:46.952 [2024-07-25 09:49:47.081987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.105690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.952 [2024-07-25 09:49:47.105751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:35:46.952 [2024-07-25 09:49:47.105766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.710 ms 00:35:46.952 [2024-07-25 09:49:47.105776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.106364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:46.952 [2024-07-25 09:49:47.106380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:35:46.952 [2024-07-25 09:49:47.106390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.544 ms 00:35:46.952 [2024-07-25 09:49:47.106398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.172579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.172638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:46.952 [2024-07-25 09:49:47.172656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.172664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.172732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.172740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:46.952 [2024-07-25 09:49:47.172747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.172759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.172874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.172890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:46.952 [2024-07-25 09:49:47.172899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.172906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.172923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 
09:49:47.172931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:46.952 [2024-07-25 09:49:47.172938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.172946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.302352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.302418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:46.952 [2024-07-25 09:49:47.302437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.302450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.419662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.419732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:46.952 [2024-07-25 09:49:47.419748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.419768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.419878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.419888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:46.952 [2024-07-25 09:49:47.419905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.419912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.419983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.419993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:46.952 [2024-07-25 09:49:47.420003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.420010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.420124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.420144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:46.952 [2024-07-25 09:49:47.420153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.420166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.420204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.420215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:35:46.952 [2024-07-25 09:49:47.420224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.420250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.420292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.420301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:46.952 [2024-07-25 09:49:47.420310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.420322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.420370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:35:46.952 [2024-07-25 09:49:47.420380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:46.952 [2024-07-25 09:49:47.420389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:35:46.952 [2024-07-25 09:49:47.420396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:46.952 [2024-07-25 09:49:47.420527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8211.659 ms, result 0 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85498 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85498 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85498 ']' 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:52.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:35:52.315 09:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:35:52.315 [2024-07-25 09:49:52.820673] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:35:52.315 [2024-07-25 09:49:52.820817] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85498 ] 00:35:52.575 [2024-07-25 09:49:52.988837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.835 [2024-07-25 09:49:53.243580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.773 [2024-07-25 09:49:54.267580] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:53.773 [2024-07-25 09:49:54.267643] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:35:54.031 [2024-07-25 09:49:54.413219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.031 [2024-07-25 09:49:54.413285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:35:54.031 [2024-07-25 09:49:54.413299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:54.031 [2024-07-25 09:49:54.413307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.031 [2024-07-25 09:49:54.413363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.031 [2024-07-25 09:49:54.413373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:35:54.031 [2024-07-25 09:49:54.413381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:35:54.031 [2024-07-25 09:49:54.413388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.031 [2024-07-25 09:49:54.413411] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:35:54.031 [2024-07-25 09:49:54.414536] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:35:54.031 [2024-07-25 09:49:54.414563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.031 [2024-07-25 09:49:54.414572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:35:54.031 [2024-07-25 09:49:54.414581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.161 ms 00:35:54.031 [2024-07-25 09:49:54.414591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.031 [2024-07-25 09:49:54.416002] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:35:54.031 [2024-07-25 09:49:54.435605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.031 [2024-07-25 09:49:54.435645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:35:54.031 [2024-07-25 09:49:54.435657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.640 ms 00:35:54.031 [2024-07-25 09:49:54.435666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.031 [2024-07-25 09:49:54.435757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.031 [2024-07-25 09:49:54.435767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:35:54.031 [2024-07-25 09:49:54.435776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:35:54.031 [2024-07-25 09:49:54.435784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.031 [2024-07-25 09:49:54.443038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.031 [2024-07-25 
09:49:54.443071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:35:54.031 [2024-07-25 09:49:54.443081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.184 ms 00:35:54.031 [2024-07-25 09:49:54.443088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.031 [2024-07-25 09:49:54.443181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.031 [2024-07-25 09:49:54.443199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:35:54.031 [2024-07-25 09:49:54.443211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:35:54.031 [2024-07-25 09:49:54.443218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.031 [2024-07-25 09:49:54.443291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.031 [2024-07-25 09:49:54.443302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:35:54.032 [2024-07-25 09:49:54.443310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:35:54.032 [2024-07-25 09:49:54.443318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.032 [2024-07-25 09:49:54.443346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:35:54.032 [2024-07-25 09:49:54.449180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.032 [2024-07-25 09:49:54.449215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:35:54.032 [2024-07-25 09:49:54.449226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.854 ms 00:35:54.032 [2024-07-25 09:49:54.449245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.032 [2024-07-25 09:49:54.449279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.032 [2024-07-25 09:49:54.449289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:35:54.032 [2024-07-25 09:49:54.449302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:54.032 [2024-07-25 09:49:54.449310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.032 [2024-07-25 09:49:54.449371] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:35:54.032 [2024-07-25 09:49:54.449395] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:35:54.032 [2024-07-25 09:49:54.449432] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:35:54.032 [2024-07-25 09:49:54.449448] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:35:54.032 [2024-07-25 09:49:54.449541] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:35:54.032 [2024-07-25 09:49:54.449555] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:35:54.032 [2024-07-25 09:49:54.449566] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:35:54.032 [2024-07-25 09:49:54.449577] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:35:54.032 [2024-07-25 09:49:54.449587] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:35:54.032 [2024-07-25 09:49:54.449596] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:35:54.032 [2024-07-25 09:49:54.449605] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:35:54.032 [2024-07-25 09:49:54.449613] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:35:54.032 [2024-07-25 09:49:54.449621] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:35:54.032 [2024-07-25 09:49:54.449630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.032 [2024-07-25 09:49:54.449638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:35:54.032 [2024-07-25 09:49:54.449648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.262 ms 00:35:54.032 [2024-07-25 09:49:54.449658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.032 [2024-07-25 09:49:54.449735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.032 [2024-07-25 09:49:54.449748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:35:54.032 [2024-07-25 09:49:54.449757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:35:54.032 [2024-07-25 09:49:54.449764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.032 [2024-07-25 09:49:54.449869] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:35:54.032 [2024-07-25 09:49:54.449878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:35:54.032 [2024-07-25 09:49:54.449886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:54.032 [2024-07-25 09:49:54.449895] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.449906] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:35:54.032 [2024-07-25 09:49:54.449912] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.449920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:35:54.032 [2024-07-25 09:49:54.449927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:35:54.032 [2024-07-25 09:49:54.449935] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:35:54.032 [2024-07-25 09:49:54.449941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.449949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:35:54.032 [2024-07-25 09:49:54.449955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:35:54.032 [2024-07-25 09:49:54.449962] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.449969] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:35:54.032 [2024-07-25 09:49:54.449976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:35:54.032 [2024-07-25 09:49:54.449983] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.449990] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:35:54.032 [2024-07-25 09:49:54.449996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:35:54.032 [2024-07-25 09:49:54.450003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.450010] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:35:54.032 [2024-07-25 09:49:54.450016] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:35:54.032 [2024-07-25 09:49:54.450024] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:54.032 [2024-07-25 09:49:54.450030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:35:54.032 [2024-07-25 09:49:54.450036] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:35:54.032 [2024-07-25 09:49:54.450043] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:54.032 [2024-07-25 09:49:54.450049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:35:54.032 [2024-07-25 09:49:54.450055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:35:54.032 [2024-07-25 09:49:54.450062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:54.032 [2024-07-25 09:49:54.450068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:35:54.032 [2024-07-25 09:49:54.450075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:35:54.032 [2024-07-25 09:49:54.450081] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:35:54.032 [2024-07-25 09:49:54.450087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:35:54.032 [2024-07-25 09:49:54.450093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:35:54.032 [2024-07-25 09:49:54.450100] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.450106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:35:54.032 [2024-07-25 09:49:54.450113] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:35:54.032 [2024-07-25 09:49:54.450119] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.450125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:35:54.032 [2024-07-25 09:49:54.450132] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:35:54.032 [2024-07-25 09:49:54.450138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.450144] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:35:54.032 [2024-07-25 09:49:54.450151] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:35:54.032 [2024-07-25 09:49:54.450157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.450163] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:35:54.032 [2024-07-25 09:49:54.450171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:35:54.032 [2024-07-25 09:49:54.450178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:35:54.032 [2024-07-25 09:49:54.450185] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:35:54.032 [2024-07-25 09:49:54.450192] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:35:54.032 [2024-07-25 09:49:54.450199] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:35:54.032 [2024-07-25 09:49:54.450206] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:35:54.032 [2024-07-25 09:49:54.450213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:35:54.032 [2024-07-25 09:49:54.450231] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:35:54.032 [2024-07-25 09:49:54.450248] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:35:54.032 [2024-07-25 09:49:54.450258] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:35:54.032 [2024-07-25 09:49:54.450267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:35:54.032 [2024-07-25 09:49:54.450282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:35:54.032 [2024-07-25 09:49:54.450304] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:35:54.032 [2024-07-25 09:49:54.450311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:35:54.032 [2024-07-25 09:49:54.450318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:35:54.032 [2024-07-25 09:49:54.450325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:35:54.032 [2024-07-25 09:49:54.450375] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:35:54.032 [2024-07-25 09:49:54.450382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:54.032 [2024-07-25 09:49:54.450397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:35:54.032 [2024-07-25 09:49:54.450404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:35:54.032 [2024-07-25 09:49:54.450411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:35:54.032 [2024-07-25 09:49:54.450420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:54.032 [2024-07-25 09:49:54.450428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:35:54.032 [2024-07-25 09:49:54.450435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.618 ms 00:35:54.032 [2024-07-25 09:49:54.450445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:54.032 [2024-07-25 09:49:54.450494] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:35:54.032 [2024-07-25 09:49:54.450503] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:35:57.321 [2024-07-25 09:49:57.518490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.518558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:35:57.321 [2024-07-25 09:49:57.518573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3073.910 ms 00:35:57.321 [2024-07-25 09:49:57.518590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.568741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.568803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:35:57.321 [2024-07-25 09:49:57.568818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.924 ms 00:35:57.321 [2024-07-25 09:49:57.568828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.568963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.568976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:35:57.321 [2024-07-25 09:49:57.568986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:35:57.321 [2024-07-25 09:49:57.568995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.626618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.626691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:35:57.321 [2024-07-25 09:49:57.626711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 57.687 ms 00:35:57.321 [2024-07-25 09:49:57.626725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.626795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.626808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:35:57.321 [2024-07-25 09:49:57.626823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:57.321 [2024-07-25 09:49:57.626836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.627591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.627617] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:35:57.321 [2024-07-25 09:49:57.627632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.616 ms 00:35:57.321 [2024-07-25 09:49:57.627645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.627712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.627728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:35:57.321 [2024-07-25 09:49:57.627743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:35:57.321 [2024-07-25 09:49:57.627755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.653555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.653619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:35:57.321 [2024-07-25 09:49:57.653635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.817 ms 00:35:57.321 [2024-07-25 09:49:57.653644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.679056] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:35:57.321 [2024-07-25 09:49:57.679125] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:35:57.321 [2024-07-25 09:49:57.679144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.679153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:35:57.321 [2024-07-25 09:49:57.679165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.371 ms 00:35:57.321 [2024-07-25 09:49:57.679174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.705135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.705217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:35:57.321 [2024-07-25 09:49:57.705243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.908 ms 00:35:57.321 [2024-07-25 09:49:57.705254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.729101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.729176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:35:57.321 [2024-07-25 09:49:57.729191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.772 ms 00:35:57.321 [2024-07-25 09:49:57.729201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.753573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.753663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:35:57.321 [2024-07-25 09:49:57.753689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.304 ms 00:35:57.321 [2024-07-25 09:49:57.753704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.754848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.754893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:35:57.321 [2024-07-25 
09:49:57.754910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.896 ms 00:35:57.321 [2024-07-25 09:49:57.754919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.321 [2024-07-25 09:49:57.876445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.321 [2024-07-25 09:49:57.876525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:35:57.321 [2024-07-25 09:49:57.876542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 121.721 ms 00:35:57.322 [2024-07-25 09:49:57.876552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.322 [2024-07-25 09:49:57.893257] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:35:57.322 [2024-07-25 09:49:57.894485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.322 [2024-07-25 09:49:57.894514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:35:57.322 [2024-07-25 09:49:57.894535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.872 ms 00:35:57.322 [2024-07-25 09:49:57.894544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.322 [2024-07-25 09:49:57.894690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.322 [2024-07-25 09:49:57.894713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:35:57.322 [2024-07-25 09:49:57.894728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:35:57.322 [2024-07-25 09:49:57.894737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.322 [2024-07-25 09:49:57.894807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.322 [2024-07-25 09:49:57.894821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:35:57.322 [2024-07-25 09:49:57.894831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:35:57.322 [2024-07-25 09:49:57.894844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.322 [2024-07-25 09:49:57.894872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.322 [2024-07-25 09:49:57.894882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:35:57.322 [2024-07-25 09:49:57.894891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:35:57.322 [2024-07-25 09:49:57.894900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.322 [2024-07-25 09:49:57.894935] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:35:57.322 [2024-07-25 09:49:57.894946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.322 [2024-07-25 09:49:57.894954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:35:57.322 [2024-07-25 09:49:57.894964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:35:57.322 [2024-07-25 09:49:57.894973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.582 [2024-07-25 09:49:57.944291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.582 [2024-07-25 09:49:57.944361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:35:57.582 [2024-07-25 09:49:57.944378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.378 ms 00:35:57.582 [2024-07-25 09:49:57.944388] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.582 [2024-07-25 09:49:57.944534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.582 [2024-07-25 09:49:57.944548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:35:57.582 [2024-07-25 09:49:57.944558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:35:57.582 [2024-07-25 09:49:57.944578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.582 [2024-07-25 09:49:57.945960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3539.004 ms, result 0 00:35:57.582 [2024-07-25 09:49:57.960812] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.582 [2024-07-25 09:49:57.976837] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:35:57.582 [2024-07-25 09:49:57.988141] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:57.582 09:49:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:35:57.582 09:49:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:35:57.582 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:35:57.582 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:35:57.582 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:35:57.842 [2024-07-25 09:49:58.227769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.842 [2024-07-25 09:49:58.227846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:35:57.842 [2024-07-25 09:49:58.227861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:35:57.842 [2024-07-25 09:49:58.227870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.842 [2024-07-25 09:49:58.227904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.842 [2024-07-25 09:49:58.227913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:35:57.842 [2024-07-25 09:49:58.227922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:57.842 [2024-07-25 09:49:58.227932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.842 [2024-07-25 09:49:58.227954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:35:57.842 [2024-07-25 09:49:58.227964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:35:57.842 [2024-07-25 09:49:58.227973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:35:57.842 [2024-07-25 09:49:58.227985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:35:57.842 [2024-07-25 09:49:58.228050] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.287 ms, result 0 00:35:57.842 true 00:35:57.842 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:58.102 { 00:35:58.102 "name": "ftl", 00:35:58.102 "properties": [ 00:35:58.102 { 00:35:58.102 "name": "superblock_version", 00:35:58.102 "value": 5, 00:35:58.102 "read-only": true 00:35:58.103 }, 
00:35:58.103 { 00:35:58.103 "name": "base_device", 00:35:58.103 "bands": [ 00:35:58.103 { 00:35:58.103 "id": 0, 00:35:58.103 "state": "CLOSED", 00:35:58.103 "validity": 1.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 1, 00:35:58.103 "state": "CLOSED", 00:35:58.103 "validity": 1.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 2, 00:35:58.103 "state": "CLOSED", 00:35:58.103 "validity": 0.007843137254901933 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 3, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 4, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 5, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 6, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 7, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 8, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 9, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 10, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 11, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 12, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 13, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 14, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 15, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 16, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 17, 00:35:58.103 "state": "FREE", 00:35:58.103 "validity": 0.0 00:35:58.103 } 00:35:58.103 ], 00:35:58.103 "read-only": true 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "name": "cache_device", 00:35:58.103 "type": "bdev", 00:35:58.103 "chunks": [ 00:35:58.103 { 00:35:58.103 "id": 0, 00:35:58.103 "state": "INACTIVE", 00:35:58.103 "utilization": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 1, 00:35:58.103 "state": "OPEN", 00:35:58.103 "utilization": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 2, 00:35:58.103 "state": "OPEN", 00:35:58.103 "utilization": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 3, 00:35:58.103 "state": "FREE", 00:35:58.103 "utilization": 0.0 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "id": 4, 00:35:58.103 "state": "FREE", 00:35:58.103 "utilization": 0.0 00:35:58.103 } 00:35:58.103 ], 00:35:58.103 "read-only": true 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "name": "verbose_mode", 00:35:58.103 "value": true, 00:35:58.103 "unit": "", 00:35:58.103 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:35:58.103 }, 00:35:58.103 { 00:35:58.103 "name": "prep_upgrade_on_shutdown", 00:35:58.103 "value": false, 00:35:58.103 "unit": "", 00:35:58.103 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:35:58.103 } 00:35:58.103 ] 00:35:58.103 } 00:35:58.103 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:35:58.103 09:49:58 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:35:58.103 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:58.363 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:35:58.363 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:35:58.363 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:35:58.363 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:35:58.363 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:35:58.624 Validate MD5 checksum, iteration 1 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:35:58.624 09:49:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:35:58.624 [2024-07-25 09:49:59.081432] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
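The xtrace above (upgrade_shutdown.sh@82 and @89) gates the checksum test on the FTL bdev being idle: it counts NV-cache chunks with non-zero utilization and bands in the OPENED state, and both counts are expected to be zero here before the read-back starts. Condensed into a standalone check, it looks roughly like the sketch below; the rpc.py path, bdev name and jq filters are copied verbatim from the trace, but this folds the two ftl_get_properties calls into one and is not the canonical script.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  props=$($rpc bdev_ftl_get_properties -b ftl)            # same JSON as dumped above
  # NV-cache chunks that still hold data (utilization != 0.0)
  used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
  # bands still in the OPENED state (filter reproduced exactly as it appears in the trace)
  opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
  # only proceed to the MD5 read-back when the device is quiescent
  [[ $used -eq 0 && $opened -eq 0 ]]

In this run both counts come back as 0, so the validation loop starts at skip=0, as the trace that follows shows.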
00:35:58.624 [2024-07-25 09:49:59.081595] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85579 ] 00:35:58.883 [2024-07-25 09:49:59.254352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.143 [2024-07-25 09:49:59.518792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.534  Copying: 602/1024 [MB] (602 MBps) Copying: 1024/1024 [MB] (average 582 MBps) 00:36:03.534 00:36:03.534 09:50:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:03.534 09:50:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:05.438 Validate MD5 checksum, iteration 2 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2d14400c1bbf6e8131efcca0ce213843 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2d14400c1bbf6e8131efcca0ce213843 != \2\d\1\4\4\0\0\c\1\b\b\f\6\e\8\1\3\1\e\f\c\c\a\0\c\e\2\1\3\8\4\3 ]] 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:05.438 09:50:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:05.438 [2024-07-25 09:50:05.953065] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
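One "Validate MD5 checksum" iteration, as traced above, reads a 1024 MiB slice of the ftln1 namespace over NVMe/TCP with spdk_dd (the ini.json config makes it the initiator), hashes the local copy, and compares it with the sum the harness recorded for that slice; the skip offset then advances by 1024 for the next iteration. Stripped of the xtrace, a single iteration reduces to roughly the following sketch; the paths and flags are taken from the spdk_dd command above, and `expected` stands in for the stored sum, which for slice 0 in this run was 2d14400c1bbf6e8131efcca0ce213843.

  SPDK=/home/vagrant/spdk_repo/spdk
  skip=0                                            # 0 for iteration 1, 1024 for iteration 2
  expected=2d14400c1bbf6e8131efcca0ce213843         # sum recorded for this slice earlier in the test
  "$SPDK"/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json="$SPDK"/test/ftl/config/ini.json \
      --ib=ftln1 --of="$SPDK"/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=$skip
  sum=$(md5sum "$SPDK"/test/ftl/file | cut -f1 -d' ')
  [[ $sum == "$expected" ]]                         # a mismatch fails the test

The trace continues below with the second iteration of the same step at skip=1024.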
00:36:05.438 [2024-07-25 09:50:05.953176] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85652 ] 00:36:05.696 [2024-07-25 09:50:06.117284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.954 [2024-07-25 09:50:06.379316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.325  Copying: 598/1024 [MB] (598 MBps) Copying: 1024/1024 [MB] (average 590 MBps) 00:36:10.325 00:36:10.325 09:50:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:36:10.325 09:50:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f1280d6d679416f47f92616450bee902 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f1280d6d679416f47f92616450bee902 != \f\1\2\8\0\d\6\d\6\7\9\4\1\6\f\4\7\f\9\2\6\1\6\4\5\0\b\e\e\9\0\2 ]] 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85498 ]] 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85498 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85719 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85719 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 85719 ']' 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:12.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
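With both slices verified, the trace above simulates a crash: tcp_target_shutdown_dirty sends SIGKILL to the target (pid 85498 in this run), so FTL never gets a clean shutdown, and tcp_target_setup relaunches spdk_tgt from the tgt.json saved at setup time. That is what drives the recovery-flavoured startup in the log that follows ("SHM: clean 0", "Recover band state", "Restore P2L checkpoints", open-chunk recovery). The restart amounts to roughly the sketch below; paths are the ones from this run, and waitforlisten is the autotest_common.sh helper that polls the new target's /var/tmp/spdk.sock RPC socket.

  SPDK=/home/vagrant/spdk_repo/spdk
  source "$SPDK"/test/common/autotest_common.sh     # provides waitforlisten
  kill -9 "$spdk_tgt_pid"                           # SIGKILL: no clean FTL shutdown
  unset spdk_tgt_pid
  # relaunch from the saved config so the same FTL bdev and NVMe/TCP subsystem come back up dirty
  "$SPDK"/build/bin/spdk_tgt '--cpumask=[0]' --config="$SPDK"/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"                     # wait for the RPC socket before issuing RPCs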
00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:12.228 09:50:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:12.228 [2024-07-25 09:50:12.604116] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:36:12.228 [2024-07-25 09:50:12.604300] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85719 ] 00:36:12.228 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 85498 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:36:12.228 [2024-07-25 09:50:12.760438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.487 [2024-07-25 09:50:12.999475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.864 [2024-07-25 09:50:14.044748] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:13.864 [2024-07-25 09:50:14.044819] bdev.c:8190:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:36:13.864 [2024-07-25 09:50:14.192541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.864 [2024-07-25 09:50:14.192608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:36:13.864 [2024-07-25 09:50:14.192623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:36:13.864 [2024-07-25 09:50:14.192632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.864 [2024-07-25 09:50:14.192708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.864 [2024-07-25 09:50:14.192720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:13.864 [2024-07-25 09:50:14.192730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:36:13.864 [2024-07-25 09:50:14.192738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.864 [2024-07-25 09:50:14.192766] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:36:13.864 [2024-07-25 09:50:14.194052] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:36:13.864 [2024-07-25 09:50:14.194083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.864 [2024-07-25 09:50:14.194092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:13.864 [2024-07-25 09:50:14.194102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.328 ms 00:36:13.864 [2024-07-25 09:50:14.194114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.864 [2024-07-25 09:50:14.194521] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:36:13.864 [2024-07-25 09:50:14.223537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.864 [2024-07-25 09:50:14.223606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:36:13.864 [2024-07-25 09:50:14.223631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.068 ms 00:36:13.864 [2024-07-25 09:50:14.223641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.864 [2024-07-25 09:50:14.240716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:36:13.864 [2024-07-25 09:50:14.240781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:36:13.864 [2024-07-25 09:50:14.240794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.092 ms 00:36:13.864 [2024-07-25 09:50:14.240803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.864 [2024-07-25 09:50:14.241259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.864 [2024-07-25 09:50:14.241278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:13.864 [2024-07-25 09:50:14.241287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.331 ms 00:36:13.864 [2024-07-25 09:50:14.241296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.864 [2024-07-25 09:50:14.241362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.864 [2024-07-25 09:50:14.241376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:13.864 [2024-07-25 09:50:14.241385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:36:13.864 [2024-07-25 09:50:14.241393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.864 [2024-07-25 09:50:14.241432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.865 [2024-07-25 09:50:14.241442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:36:13.865 [2024-07-25 09:50:14.241453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:36:13.865 [2024-07-25 09:50:14.241461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.865 [2024-07-25 09:50:14.241490] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:36:13.865 [2024-07-25 09:50:14.247650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.865 [2024-07-25 09:50:14.247688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:13.865 [2024-07-25 09:50:14.247699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.179 ms 00:36:13.865 [2024-07-25 09:50:14.247723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.865 [2024-07-25 09:50:14.247763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.865 [2024-07-25 09:50:14.247772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:36:13.865 [2024-07-25 09:50:14.247781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:13.865 [2024-07-25 09:50:14.247789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.865 [2024-07-25 09:50:14.247845] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:36:13.865 [2024-07-25 09:50:14.247869] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:36:13.865 [2024-07-25 09:50:14.247910] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:36:13.865 [2024-07-25 09:50:14.247926] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:36:13.865 [2024-07-25 09:50:14.248022] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:36:13.865 [2024-07-25 09:50:14.248057] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:36:13.865 [2024-07-25 09:50:14.248068] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:36:13.865 [2024-07-25 09:50:14.248079] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:36:13.865 [2024-07-25 09:50:14.248088] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:36:13.865 [2024-07-25 09:50:14.248098] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:36:13.865 [2024-07-25 09:50:14.248109] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:36:13.865 [2024-07-25 09:50:14.248117] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:36:13.865 [2024-07-25 09:50:14.248125] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:36:13.865 [2024-07-25 09:50:14.248134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.865 [2024-07-25 09:50:14.248146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:36:13.865 [2024-07-25 09:50:14.248155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:36:13.865 [2024-07-25 09:50:14.248163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.865 [2024-07-25 09:50:14.248254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.865 [2024-07-25 09:50:14.248264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:36:13.865 [2024-07-25 09:50:14.248273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:36:13.865 [2024-07-25 09:50:14.248284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.865 [2024-07-25 09:50:14.248385] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:36:13.865 [2024-07-25 09:50:14.248399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:36:13.865 [2024-07-25 09:50:14.248409] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:13.865 [2024-07-25 09:50:14.248419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:36:13.865 [2024-07-25 09:50:14.248436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248444] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:36:13.865 [2024-07-25 09:50:14.248451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:36:13.865 [2024-07-25 09:50:14.248459] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:36:13.865 [2024-07-25 09:50:14.248466] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248473] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:36:13.865 [2024-07-25 09:50:14.248481] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:36:13.865 [2024-07-25 09:50:14.248489] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:36:13.865 [2024-07-25 09:50:14.248504] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:36:13.865 [2024-07-25 09:50:14.248511] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248518] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:36:13.865 [2024-07-25 09:50:14.248525] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:36:13.865 [2024-07-25 09:50:14.248532] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:36:13.865 [2024-07-25 09:50:14.248546] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:36:13.865 [2024-07-25 09:50:14.248553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:13.865 [2024-07-25 09:50:14.248560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:36:13.865 [2024-07-25 09:50:14.248567] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:36:13.865 [2024-07-25 09:50:14.248574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:13.865 [2024-07-25 09:50:14.248581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:36:13.865 [2024-07-25 09:50:14.248588] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:36:13.865 [2024-07-25 09:50:14.248595] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:13.865 [2024-07-25 09:50:14.248602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:36:13.865 [2024-07-25 09:50:14.248609] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:36:13.865 [2024-07-25 09:50:14.248616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:36:13.865 [2024-07-25 09:50:14.248623] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:36:13.865 [2024-07-25 09:50:14.248630] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:36:13.865 [2024-07-25 09:50:14.248637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:36:13.865 [2024-07-25 09:50:14.248652] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:36:13.865 [2024-07-25 09:50:14.248671] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:36:13.865 [2024-07-25 09:50:14.248687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:36:13.865 [2024-07-25 09:50:14.248708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:36:13.865 [2024-07-25 09:50:14.248716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248722] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:36:13.865 [2024-07-25 09:50:14.248730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:36:13.865 [2024-07-25 09:50:14.248739] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:36:13.865 [2024-07-25 09:50:14.248747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:36:13.865 [2024-07-25 09:50:14.248755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:36:13.865 [2024-07-25 09:50:14.248763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:36:13.865 [2024-07-25 09:50:14.248785] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:36:13.865 [2024-07-25 09:50:14.248793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:36:13.865 [2024-07-25 09:50:14.248801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:36:13.865 [2024-07-25 09:50:14.248808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:36:13.865 [2024-07-25 09:50:14.248817] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:36:13.865 [2024-07-25 09:50:14.248831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:13.865 [2024-07-25 09:50:14.248840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:36:13.865 [2024-07-25 09:50:14.248848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:36:13.865 [2024-07-25 09:50:14.248856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:36:13.865 [2024-07-25 09:50:14.248864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:36:13.865 [2024-07-25 09:50:14.248871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:36:13.865 [2024-07-25 09:50:14.248880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:36:13.865 [2024-07-25 09:50:14.248887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:36:13.865 [2024-07-25 09:50:14.248895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:36:13.865 [2024-07-25 09:50:14.248903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:36:13.865 [2024-07-25 09:50:14.248911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:36:13.865 [2024-07-25 09:50:14.248918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:36:13.866 [2024-07-25 09:50:14.248926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:36:13.866 [2024-07-25 09:50:14.248934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:36:13.866 [2024-07-25 09:50:14.248942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:36:13.866 [2024-07-25 09:50:14.248949] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:36:13.866 [2024-07-25 09:50:14.248958] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:36:13.866 [2024-07-25 09:50:14.248966] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:36:13.866 [2024-07-25 09:50:14.248974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:36:13.866 [2024-07-25 09:50:14.248982] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:36:13.866 [2024-07-25 09:50:14.248990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:36:13.866 [2024-07-25 09:50:14.248998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.249007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:36:13.866 [2024-07-25 09:50:14.249015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.674 ms 00:36:13.866 [2024-07-25 09:50:14.249023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.866 [2024-07-25 09:50:14.295646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.295699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:13.866 [2024-07-25 09:50:14.295713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.644 ms 00:36:13.866 [2024-07-25 09:50:14.295722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.866 [2024-07-25 09:50:14.295795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.295805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:36:13.866 [2024-07-25 09:50:14.295820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:36:13.866 [2024-07-25 09:50:14.295828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.866 [2024-07-25 09:50:14.350341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.350400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:13.866 [2024-07-25 09:50:14.350419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.510 ms 00:36:13.866 [2024-07-25 09:50:14.350433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.866 [2024-07-25 09:50:14.350506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.350515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:13.866 [2024-07-25 09:50:14.350525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:36:13.866 [2024-07-25 09:50:14.350533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.866 [2024-07-25 09:50:14.350676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.350694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:13.866 [2024-07-25 09:50:14.350704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:36:13.866 [2024-07-25 09:50:14.350712] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:36:13.866 [2024-07-25 09:50:14.350758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.350768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:13.866 [2024-07-25 09:50:14.350777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:36:13.866 [2024-07-25 09:50:14.350786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.866 [2024-07-25 09:50:14.376247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.376301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:13.866 [2024-07-25 09:50:14.376328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.485 ms 00:36:13.866 [2024-07-25 09:50:14.376338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.866 [2024-07-25 09:50:14.376517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.376533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:36:13.866 [2024-07-25 09:50:14.376543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:36:13.866 [2024-07-25 09:50:14.376555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.866 [2024-07-25 09:50:14.418532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.418601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:36:13.866 [2024-07-25 09:50:14.418617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.028 ms 00:36:13.866 [2024-07-25 09:50:14.418633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:13.866 [2024-07-25 09:50:14.435925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:13.866 [2024-07-25 09:50:14.435986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:36:13.866 [2024-07-25 09:50:14.436017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.800 ms 00:36:13.866 [2024-07-25 09:50:14.436026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.125 [2024-07-25 09:50:14.538475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.126 [2024-07-25 09:50:14.538551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:36:14.126 [2024-07-25 09:50:14.538582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 102.537 ms 00:36:14.126 [2024-07-25 09:50:14.538591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.126 [2024-07-25 09:50:14.538836] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:36:14.126 [2024-07-25 09:50:14.538980] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:36:14.126 [2024-07-25 09:50:14.539120] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:36:14.126 [2024-07-25 09:50:14.539296] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:36:14.126 [2024-07-25 09:50:14.539310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.126 [2024-07-25 09:50:14.539319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:36:14.126 
[2024-07-25 09:50:14.539333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.628 ms 00:36:14.126 [2024-07-25 09:50:14.539342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.126 [2024-07-25 09:50:14.539456] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:36:14.126 [2024-07-25 09:50:14.539469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.126 [2024-07-25 09:50:14.539478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:36:14.126 [2024-07-25 09:50:14.539487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:36:14.126 [2024-07-25 09:50:14.539495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.126 [2024-07-25 09:50:14.567677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.126 [2024-07-25 09:50:14.567769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:36:14.126 [2024-07-25 09:50:14.567786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.207 ms 00:36:14.126 [2024-07-25 09:50:14.567795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.126 [2024-07-25 09:50:14.586128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:14.126 [2024-07-25 09:50:14.586195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:36:14.126 [2024-07-25 09:50:14.586221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:36:14.126 [2024-07-25 09:50:14.586242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:14.126 [2024-07-25 09:50:14.586555] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:36:14.693 [2024-07-25 09:50:15.132607] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:36:14.693 [2024-07-25 09:50:15.132789] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:36:15.261 [2024-07-25 09:50:15.681382] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:36:15.262 [2024-07-25 09:50:15.681505] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:36:15.262 [2024-07-25 09:50:15.681520] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:36:15.262 [2024-07-25 09:50:15.681534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:15.262 [2024-07-25 09:50:15.681544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:36:15.262 [2024-07-25 09:50:15.681559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1097.274 ms 00:36:15.262 [2024-07-25 09:50:15.681568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:15.262 [2024-07-25 09:50:15.681609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:15.262 [2024-07-25 09:50:15.681619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:36:15.262 [2024-07-25 09:50:15.681628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:36:15.262 [2024-07-25 09:50:15.681649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:36:15.262 [2024-07-25 09:50:15.698128] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:36:15.262 [2024-07-25 09:50:15.698364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:15.262 [2024-07-25 09:50:15.698377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:36:15.262 [2024-07-25 09:50:15.698389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.726 ms 00:36:15.262 [2024-07-25 09:50:15.698398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:15.262 [2024-07-25 09:50:15.699196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:15.262 [2024-07-25 09:50:15.699220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:36:15.262 [2024-07-25 09:50:15.699250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.637 ms 00:36:15.262 [2024-07-25 09:50:15.699264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:15.262 [2024-07-25 09:50:15.701505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:15.262 [2024-07-25 09:50:15.701530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:36:15.262 [2024-07-25 09:50:15.701540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.217 ms 00:36:15.262 [2024-07-25 09:50:15.701548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:15.262 [2024-07-25 09:50:15.701595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:15.262 [2024-07-25 09:50:15.701605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:36:15.262 [2024-07-25 09:50:15.701614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:36:15.262 [2024-07-25 09:50:15.701622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:15.262 [2024-07-25 09:50:15.701747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:15.262 [2024-07-25 09:50:15.701758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:36:15.262 [2024-07-25 09:50:15.701767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:36:15.262 [2024-07-25 09:50:15.701775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:15.262 [2024-07-25 09:50:15.701799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:15.262 [2024-07-25 09:50:15.701808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:36:15.262 [2024-07-25 09:50:15.701816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:36:15.262 [2024-07-25 09:50:15.701824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:15.262 [2024-07-25 09:50:15.701868] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:36:15.262 [2024-07-25 09:50:15.701877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:15.262 [2024-07-25 09:50:15.701889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:36:15.262 [2024-07-25 09:50:15.701897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:36:15.262 [2024-07-25 09:50:15.701904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:15.262 [2024-07-25 09:50:15.701951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:36:15.262 [2024-07-25 09:50:15.701960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:36:15.262 [2024-07-25 09:50:15.701967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:36:15.262 [2024-07-25 09:50:15.701975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:15.262 [2024-07-25 09:50:15.703226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1513.068 ms, result 0 00:36:15.262 [2024-07-25 09:50:15.718834] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:15.262 [2024-07-25 09:50:15.734824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:36:15.262 [2024-07-25 09:50:15.746096] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:15.262 Validate MD5 checksum, iteration 1 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:15.262 09:50:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:36:15.522 [2024-07-25 09:50:15.894353] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:36:15.522 [2024-07-25 09:50:15.894536] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85759 ] 00:36:15.522 [2024-07-25 09:50:16.075736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.781 [2024-07-25 09:50:16.332089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:22.482  Copying: 608/1024 [MB] (608 MBps) Copying: 1024/1024 [MB] (average 628 MBps) 00:36:22.482 00:36:22.482 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:36:22.482 09:50:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:23.861 Validate MD5 checksum, iteration 2 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2d14400c1bbf6e8131efcca0ce213843 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2d14400c1bbf6e8131efcca0ce213843 != \2\d\1\4\4\0\0\c\1\b\b\f\6\e\8\1\3\1\e\f\c\c\a\0\c\e\2\1\3\8\4\3 ]] 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:36:23.861 09:50:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:36:23.861 [2024-07-25 09:50:24.185192] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:36:23.861 [2024-07-25 09:50:24.185337] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85843 ] 00:36:23.861 [2024-07-25 09:50:24.350411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.120 [2024-07-25 09:50:24.577179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:28.037  Copying: 613/1024 [MB] (613 MBps) Copying: 1024/1024 [MB] (average 599 MBps) 00:36:28.037 00:36:28.038 09:50:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:36:28.038 09:50:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:29.941 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:36:29.941 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f1280d6d679416f47f92616450bee902 00:36:29.941 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f1280d6d679416f47f92616450bee902 != \f\1\2\8\0\d\6\d\6\7\9\4\1\6\f\4\7\f\9\2\6\1\6\4\5\0\b\e\e\9\0\2 ]] 00:36:29.941 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:36:29.941 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:36:29.941 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85719 ]] 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85719 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 85719 ']' 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 85719 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85719 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85719' 00:36:29.942 killing process with pid 85719 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 85719 00:36:29.942 09:50:30 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@974 -- # wait 85719 00:36:31.319 [2024-07-25 09:50:31.693816] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:36:31.319 [2024-07-25 09:50:31.711645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.319 [2024-07-25 09:50:31.711694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:36:31.319 [2024-07-25 09:50:31.711706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:36:31.319 [2024-07-25 09:50:31.711714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.319 [2024-07-25 09:50:31.711737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:36:31.319 [2024-07-25 09:50:31.715664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.319 [2024-07-25 09:50:31.715699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:36:31.319 [2024-07-25 09:50:31.715710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.922 ms 00:36:31.319 [2024-07-25 09:50:31.715717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.319 [2024-07-25 09:50:31.715921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.319 [2024-07-25 09:50:31.715939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:36:31.319 [2024-07-25 09:50:31.715948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:36:31.319 [2024-07-25 09:50:31.715957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.319 [2024-07-25 09:50:31.718586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.319 [2024-07-25 09:50:31.718626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:36:31.319 [2024-07-25 09:50:31.718642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.617 ms 00:36:31.319 [2024-07-25 09:50:31.718650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.319 [2024-07-25 09:50:31.719588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.319 [2024-07-25 09:50:31.719614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:36:31.319 [2024-07-25 09:50:31.719623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.906 ms 00:36:31.319 [2024-07-25 09:50:31.719631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.319 [2024-07-25 09:50:31.735833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.320 [2024-07-25 09:50:31.735874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:36:31.320 [2024-07-25 09:50:31.735885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.185 ms 00:36:31.320 [2024-07-25 09:50:31.735893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.744427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.320 [2024-07-25 09:50:31.744464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:36:31.320 [2024-07-25 09:50:31.744475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.516 ms 00:36:31.320 [2024-07-25 09:50:31.744483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.744569] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.320 [2024-07-25 09:50:31.744580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:36:31.320 [2024-07-25 09:50:31.744592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:36:31.320 [2024-07-25 09:50:31.744600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.759499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.320 [2024-07-25 09:50:31.759534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:36:31.320 [2024-07-25 09:50:31.759545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.911 ms 00:36:31.320 [2024-07-25 09:50:31.759552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.774591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.320 [2024-07-25 09:50:31.774627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:36:31.320 [2024-07-25 09:50:31.774637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.035 ms 00:36:31.320 [2024-07-25 09:50:31.774644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.789381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.320 [2024-07-25 09:50:31.789419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:36:31.320 [2024-07-25 09:50:31.789429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.734 ms 00:36:31.320 [2024-07-25 09:50:31.789436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.805219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.320 [2024-07-25 09:50:31.805293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:36:31.320 [2024-07-25 09:50:31.805305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.745 ms 00:36:31.320 [2024-07-25 09:50:31.805312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.805351] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:36:31.320 [2024-07-25 09:50:31.805366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:36:31.320 [2024-07-25 09:50:31.805377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:36:31.320 [2024-07-25 09:50:31.805385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:36:31.320 [2024-07-25 09:50:31.805394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 
0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:31.320 [2024-07-25 09:50:31.805544] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:36:31.320 [2024-07-25 09:50:31.805552] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2df06f8c-c9ac-42e1-b7ad-26fffdfd390a 00:36:31.320 [2024-07-25 09:50:31.805561] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:36:31.320 [2024-07-25 09:50:31.805568] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:36:31.320 [2024-07-25 09:50:31.805576] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:36:31.320 [2024-07-25 09:50:31.805584] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:36:31.320 [2024-07-25 09:50:31.805591] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:36:31.320 [2024-07-25 09:50:31.805603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:36:31.320 [2024-07-25 09:50:31.805612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:36:31.320 [2024-07-25 09:50:31.805618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:36:31.320 [2024-07-25 09:50:31.805625] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:36:31.320 [2024-07-25 09:50:31.805634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.320 [2024-07-25 09:50:31.805641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:36:31.320 [2024-07-25 09:50:31.805650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:36:31.320 [2024-07-25 09:50:31.805659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.825856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:36:31.320 [2024-07-25 09:50:31.825920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:36:31.320 [2024-07-25 09:50:31.825932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.195 ms 00:36:31.320 [2024-07-25 09:50:31.825967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.826485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:36:31.320 [2024-07-25 09:50:31.826503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:36:31.320 [2024-07-25 09:50:31.826511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.474 ms 00:36:31.320 [2024-07-25 09:50:31.826518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.888333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.320 [2024-07-25 09:50:31.888392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:36:31.320 [2024-07-25 09:50:31.888426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.320 [2024-07-25 09:50:31.888434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.888486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.320 [2024-07-25 09:50:31.888494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:36:31.320 [2024-07-25 09:50:31.888502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.320 [2024-07-25 09:50:31.888508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.888604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.320 [2024-07-25 09:50:31.888617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:36:31.320 [2024-07-25 09:50:31.888625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.320 [2024-07-25 09:50:31.888636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.320 [2024-07-25 09:50:31.888654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.320 [2024-07-25 09:50:31.888669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:36:31.320 [2024-07-25 09:50:31.888677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.320 [2024-07-25 09:50:31.888685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.580 [2024-07-25 09:50:32.009291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.580 [2024-07-25 09:50:32.009350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:36:31.580 [2024-07-25 09:50:32.009390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.580 [2024-07-25 09:50:32.009399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.580 [2024-07-25 09:50:32.116352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.580 [2024-07-25 09:50:32.116406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:36:31.580 [2024-07-25 09:50:32.116418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.580 [2024-07-25 09:50:32.116426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.580 [2024-07-25 09:50:32.116525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.580 [2024-07-25 09:50:32.116536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:36:31.580 [2024-07-25 09:50:32.116543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.580 [2024-07-25 09:50:32.116551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.580 [2024-07-25 09:50:32.116601] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.580 [2024-07-25 09:50:32.116610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:36:31.580 [2024-07-25 09:50:32.116618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.580 [2024-07-25 09:50:32.116625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.580 [2024-07-25 09:50:32.116753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.580 [2024-07-25 09:50:32.116771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:36:31.580 [2024-07-25 09:50:32.116780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.580 [2024-07-25 09:50:32.116788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.580 [2024-07-25 09:50:32.116836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.580 [2024-07-25 09:50:32.116856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:36:31.580 [2024-07-25 09:50:32.116864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.580 [2024-07-25 09:50:32.116871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.580 [2024-07-25 09:50:32.116908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.580 [2024-07-25 09:50:32.116918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:36:31.580 [2024-07-25 09:50:32.116925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.580 [2024-07-25 09:50:32.116933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.580 [2024-07-25 09:50:32.116976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:36:31.580 [2024-07-25 09:50:32.116986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:36:31.580 [2024-07-25 09:50:32.116994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:36:31.580 [2024-07-25 09:50:32.117001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:36:31.580 [2024-07-25 09:50:32.117118] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 406.222 ms, result 0 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:36:32.974 Remove shared memory files 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85498 00:36:32.974 09:50:33 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:36:32.974 00:36:32.974 real 1m38.117s 00:36:32.974 user 2m18.771s 00:36:32.974 sys 0m22.140s 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:32.974 09:50:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:36:32.974 ************************************ 00:36:32.974 END TEST ftl_upgrade_shutdown 00:36:32.974 ************************************ 00:36:32.974 09:50:33 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:36:32.974 09:50:33 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:32.974 09:50:33 ftl -- ftl/ftl.sh@14 -- # killprocess 78543 00:36:32.974 09:50:33 ftl -- common/autotest_common.sh@950 -- # '[' -z 78543 ']' 00:36:32.974 09:50:33 ftl -- common/autotest_common.sh@954 -- # kill -0 78543 00:36:32.974 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78543) - No such process 00:36:32.974 Process with pid 78543 is not found 00:36:32.974 09:50:33 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 78543 is not found' 00:36:32.974 09:50:33 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:32.974 09:50:33 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=85979 00:36:32.974 09:50:33 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:32.974 09:50:33 ftl -- ftl/ftl.sh@20 -- # waitforlisten 85979 00:36:32.974 09:50:33 ftl -- common/autotest_common.sh@831 -- # '[' -z 85979 ']' 00:36:32.974 09:50:33 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:32.974 09:50:33 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:36:32.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:32.974 09:50:33 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:32.974 09:50:33 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:36:32.974 09:50:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:33.233 [2024-07-25 09:50:33.636600] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:36:33.233 [2024-07-25 09:50:33.636750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85979 ] 00:36:33.233 [2024-07-25 09:50:33.805331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.493 [2024-07-25 09:50:34.032764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.431 09:50:34 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:36:34.431 09:50:34 ftl -- common/autotest_common.sh@864 -- # return 0 00:36:34.431 09:50:34 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:34.690 nvme0n1 00:36:34.690 09:50:35 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:34.690 09:50:35 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:34.690 09:50:35 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:34.950 09:50:35 ftl -- ftl/common.sh@28 -- # stores=76786d88-593c-4a5e-99c9-c9db9c43e5ac 00:36:34.950 09:50:35 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:34.950 09:50:35 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 76786d88-593c-4a5e-99c9-c9db9c43e5ac 00:36:35.209 09:50:35 ftl -- ftl/ftl.sh@23 -- # killprocess 85979 00:36:35.209 09:50:35 ftl -- common/autotest_common.sh@950 -- # '[' -z 85979 ']' 00:36:35.209 09:50:35 ftl -- common/autotest_common.sh@954 -- # kill -0 85979 00:36:35.209 09:50:35 ftl -- common/autotest_common.sh@955 -- # uname 00:36:35.209 09:50:35 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:36:35.209 09:50:35 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85979 00:36:35.209 09:50:35 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:36:35.209 09:50:35 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:36:35.209 09:50:35 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85979' 00:36:35.209 killing process with pid 85979 00:36:35.209 09:50:35 ftl -- common/autotest_common.sh@969 -- # kill 85979 00:36:35.209 09:50:35 ftl -- common/autotest_common.sh@974 -- # wait 85979 00:36:37.748 09:50:38 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:38.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:38.008 Waiting for block devices as requested 00:36:38.008 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:38.266 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:38.266 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:38.526 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:43.801 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:43.801 09:50:43 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:43.801 Remove shared memory files 00:36:43.801 09:50:43 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:43.801 09:50:43 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:43.802 09:50:43 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:43.802 09:50:43 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:43.802 09:50:43 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:43.802 09:50:43 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:43.802 
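The xtrace above shows the teardown helpers used throughout these tests: killprocess checks the target pid with kill -0, confirms via ps that it is an SPDK reactor (and not sudo), then kills and waits on it, while remove_shm deletes the per-pid shared-memory trace file. A minimal bash sketch of that pattern, assuming a single directly-owned pid (the real common.sh helpers also resolve sudo-wrapped children and remove more files):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0            # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0 in the trace
        if [[ $name == sudo ]]; then
            return 1                                      # real helper resolves the child; omitted in this sketch
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                   # reap it when it is our child
    }

    remove_shm() {
        echo 'Remove shared memory files'
        rm -f "/dev/shm/spdk_tgt_trace.pid$1"             # file naming taken from the trace above
        rm -f /dev/shm/iscsi
    }

    [[ -n ${spdk_tgt_pid:-} ]] && killprocess "$spdk_tgt_pid" && remove_shm "$spdk_tgt_pid"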
************************************ 00:36:43.802 END TEST ftl 00:36:43.802 ************************************ 00:36:43.802 00:36:43.802 real 10m55.848s 00:36:43.802 user 13m51.071s 00:36:43.802 sys 1m16.440s 00:36:43.802 09:50:43 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:36:43.802 09:50:43 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:43.802 09:50:44 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:43.802 09:50:44 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:43.802 09:50:44 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:43.802 09:50:44 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:36:43.802 09:50:44 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:43.802 09:50:44 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:43.802 09:50:44 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:43.802 09:50:44 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:36:43.802 09:50:44 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:36:43.802 09:50:44 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:36:43.802 09:50:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:43.802 09:50:44 -- common/autotest_common.sh@10 -- # set +x 00:36:43.802 09:50:44 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:36:43.802 09:50:44 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:43.802 09:50:44 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:43.802 09:50:44 -- common/autotest_common.sh@10 -- # set +x 00:36:45.707 INFO: APP EXITING 00:36:45.707 INFO: killing all VMs 00:36:45.707 INFO: killing vhost app 00:36:45.707 INFO: EXIT DONE 00:36:45.708 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:46.276 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:46.276 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:46.276 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:46.276 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:46.844 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:47.103 Cleaning 00:36:47.103 Removing: /var/run/dpdk/spdk0/config 00:36:47.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:47.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:47.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:47.103 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:47.103 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:47.103 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:47.103 Removing: /var/run/dpdk/spdk0 00:36:47.103 Removing: /var/run/dpdk/spdk_pid62051 00:36:47.103 Removing: /var/run/dpdk/spdk_pid62295 00:36:47.103 Removing: /var/run/dpdk/spdk_pid62516 00:36:47.103 Removing: /var/run/dpdk/spdk_pid62626 00:36:47.103 Removing: /var/run/dpdk/spdk_pid62687 00:36:47.103 Removing: /var/run/dpdk/spdk_pid62821 00:36:47.103 Removing: /var/run/dpdk/spdk_pid62844 00:36:47.103 Removing: /var/run/dpdk/spdk_pid63030 00:36:47.103 Removing: /var/run/dpdk/spdk_pid63140 00:36:47.103 Removing: /var/run/dpdk/spdk_pid63245 00:36:47.103 Removing: /var/run/dpdk/spdk_pid63359 00:36:47.103 Removing: /var/run/dpdk/spdk_pid63472 00:36:47.103 Removing: /var/run/dpdk/spdk_pid63517 00:36:47.103 Removing: /var/run/dpdk/spdk_pid63559 00:36:47.103 Removing: /var/run/dpdk/spdk_pid63627 00:36:47.103 Removing: /var/run/dpdk/spdk_pid63747 00:36:47.103 Removing: /var/run/dpdk/spdk_pid64193 00:36:47.103 Removing: /var/run/dpdk/spdk_pid64268 
00:36:47.103 Removing: /var/run/dpdk/spdk_pid64342 00:36:47.103 Removing: /var/run/dpdk/spdk_pid64364 00:36:47.103 Removing: /var/run/dpdk/spdk_pid64512 00:36:47.103 Removing: /var/run/dpdk/spdk_pid64533 00:36:47.103 Removing: /var/run/dpdk/spdk_pid64682 00:36:47.103 Removing: /var/run/dpdk/spdk_pid64703 00:36:47.103 Removing: /var/run/dpdk/spdk_pid64773 00:36:47.103 Removing: /var/run/dpdk/spdk_pid64791 00:36:47.103 Removing: /var/run/dpdk/spdk_pid64859 00:36:47.363 Removing: /var/run/dpdk/spdk_pid64884 00:36:47.363 Removing: /var/run/dpdk/spdk_pid65071 00:36:47.363 Removing: /var/run/dpdk/spdk_pid65113 00:36:47.363 Removing: /var/run/dpdk/spdk_pid65194 00:36:47.363 Removing: /var/run/dpdk/spdk_pid65372 00:36:47.363 Removing: /var/run/dpdk/spdk_pid65473 00:36:47.363 Removing: /var/run/dpdk/spdk_pid65515 00:36:47.363 Removing: /var/run/dpdk/spdk_pid65981 00:36:47.363 Removing: /var/run/dpdk/spdk_pid66085 00:36:47.363 Removing: /var/run/dpdk/spdk_pid66216 00:36:47.363 Removing: /var/run/dpdk/spdk_pid66275 00:36:47.363 Removing: /var/run/dpdk/spdk_pid66305 00:36:47.363 Removing: /var/run/dpdk/spdk_pid66382 00:36:47.363 Removing: /var/run/dpdk/spdk_pid67025 00:36:47.363 Removing: /var/run/dpdk/spdk_pid67067 00:36:47.363 Removing: /var/run/dpdk/spdk_pid67555 00:36:47.363 Removing: /var/run/dpdk/spdk_pid67654 00:36:47.363 Removing: /var/run/dpdk/spdk_pid67781 00:36:47.363 Removing: /var/run/dpdk/spdk_pid67834 00:36:47.363 Removing: /var/run/dpdk/spdk_pid67865 00:36:47.363 Removing: /var/run/dpdk/spdk_pid67896 00:36:47.363 Removing: /var/run/dpdk/spdk_pid69750 00:36:47.363 Removing: /var/run/dpdk/spdk_pid69898 00:36:47.363 Removing: /var/run/dpdk/spdk_pid69902 00:36:47.363 Removing: /var/run/dpdk/spdk_pid69920 00:36:47.363 Removing: /var/run/dpdk/spdk_pid69967 00:36:47.363 Removing: /var/run/dpdk/spdk_pid69971 00:36:47.363 Removing: /var/run/dpdk/spdk_pid69983 00:36:47.363 Removing: /var/run/dpdk/spdk_pid70029 00:36:47.363 Removing: /var/run/dpdk/spdk_pid70033 00:36:47.363 Removing: /var/run/dpdk/spdk_pid70045 00:36:47.363 Removing: /var/run/dpdk/spdk_pid70116 00:36:47.363 Removing: /var/run/dpdk/spdk_pid70120 00:36:47.363 Removing: /var/run/dpdk/spdk_pid70132 00:36:47.363 Removing: /var/run/dpdk/spdk_pid71537 00:36:47.363 Removing: /var/run/dpdk/spdk_pid71640 00:36:47.363 Removing: /var/run/dpdk/spdk_pid73060 00:36:47.363 Removing: /var/run/dpdk/spdk_pid74457 00:36:47.363 Removing: /var/run/dpdk/spdk_pid74566 00:36:47.363 Removing: /var/run/dpdk/spdk_pid74673 00:36:47.363 Removing: /var/run/dpdk/spdk_pid74788 00:36:47.363 Removing: /var/run/dpdk/spdk_pid74915 00:36:47.363 Removing: /var/run/dpdk/spdk_pid75000 00:36:47.363 Removing: /var/run/dpdk/spdk_pid75140 00:36:47.363 Removing: /var/run/dpdk/spdk_pid75517 00:36:47.363 Removing: /var/run/dpdk/spdk_pid75559 00:36:47.363 Removing: /var/run/dpdk/spdk_pid76010 00:36:47.363 Removing: /var/run/dpdk/spdk_pid76196 00:36:47.363 Removing: /var/run/dpdk/spdk_pid76301 00:36:47.363 Removing: /var/run/dpdk/spdk_pid76422 00:36:47.363 Removing: /var/run/dpdk/spdk_pid76482 00:36:47.363 Removing: /var/run/dpdk/spdk_pid76513 00:36:47.363 Removing: /var/run/dpdk/spdk_pid77052 00:36:47.363 Removing: /var/run/dpdk/spdk_pid77118 00:36:47.363 Removing: /var/run/dpdk/spdk_pid77196 00:36:47.363 Removing: /var/run/dpdk/spdk_pid77600 00:36:47.363 Removing: /var/run/dpdk/spdk_pid77745 00:36:47.363 Removing: /var/run/dpdk/spdk_pid78543 00:36:47.363 Removing: /var/run/dpdk/spdk_pid78678 00:36:47.363 Removing: /var/run/dpdk/spdk_pid78896 00:36:47.363 Removing: 
/var/run/dpdk/spdk_pid79014 00:36:47.363 Removing: /var/run/dpdk/spdk_pid79377 00:36:47.363 Removing: /var/run/dpdk/spdk_pid79657 00:36:47.363 Removing: /var/run/dpdk/spdk_pid80053 00:36:47.363 Removing: /var/run/dpdk/spdk_pid80301 00:36:47.622 Removing: /var/run/dpdk/spdk_pid80455 00:36:47.623 Removing: /var/run/dpdk/spdk_pid80524 00:36:47.623 Removing: /var/run/dpdk/spdk_pid80662 00:36:47.623 Removing: /var/run/dpdk/spdk_pid80698 00:36:47.623 Removing: /var/run/dpdk/spdk_pid80767 00:36:47.623 Removing: /var/run/dpdk/spdk_pid80972 00:36:47.623 Removing: /var/run/dpdk/spdk_pid81287 00:36:47.623 Removing: /var/run/dpdk/spdk_pid81667 00:36:47.623 Removing: /var/run/dpdk/spdk_pid82022 00:36:47.623 Removing: /var/run/dpdk/spdk_pid82410 00:36:47.623 Removing: /var/run/dpdk/spdk_pid82844 00:36:47.623 Removing: /var/run/dpdk/spdk_pid82988 00:36:47.623 Removing: /var/run/dpdk/spdk_pid83075 00:36:47.623 Removing: /var/run/dpdk/spdk_pid83605 00:36:47.623 Removing: /var/run/dpdk/spdk_pid83670 00:36:47.623 Removing: /var/run/dpdk/spdk_pid84078 00:36:47.623 Removing: /var/run/dpdk/spdk_pid84432 00:36:47.623 Removing: /var/run/dpdk/spdk_pid84847 00:36:47.623 Removing: /var/run/dpdk/spdk_pid84975 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85034 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85108 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85171 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85247 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85498 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85579 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85652 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85719 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85759 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85843 00:36:47.623 Removing: /var/run/dpdk/spdk_pid85979 00:36:47.623 Clean 00:36:47.623 09:50:48 -- common/autotest_common.sh@1451 -- # return 0 00:36:47.623 09:50:48 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:36:47.623 09:50:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:47.623 09:50:48 -- common/autotest_common.sh@10 -- # set +x 00:36:47.623 09:50:48 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:36:47.623 09:50:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:47.623 09:50:48 -- common/autotest_common.sh@10 -- # set +x 00:36:47.882 09:50:48 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:47.882 09:50:48 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:47.882 09:50:48 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:47.882 09:50:48 -- spdk/autotest.sh@395 -- # hash lcov 00:36:47.882 09:50:48 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:47.882 09:50:48 -- spdk/autotest.sh@397 -- # hostname 00:36:47.882 09:50:48 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:47.882 geninfo: WARNING: invalid characters removed from testname! 
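The coverage steps in this part of the log follow a capture-merge-filter flow: lcov captures the counters accumulated during the test run (tagged with the hostname), merges them with the pre-test baseline, then strips DPDK, system and helper-app paths before the intermediate files are removed. A condensed bash sketch of that flow (rc flags trimmed to the branch/function switches; paths as in the trace):

    LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)
    repo=/home/vagrant/spdk_repo/spdk
    out=$repo/../output

    # capture what ran during the tests, tagged with this host's name
    lcov "${LCOV_OPTS[@]}" -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

    # merge the pre-test baseline with the test-time capture
    lcov "${LCOV_OPTS[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # drop bundled DPDK, system headers and helper apps from the totals
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${LCOV_OPTS[@]}" -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done

    rm -f "$out/cov_base.info" "$out/cov_test.info"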
00:37:14.493 09:51:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:15.096 09:51:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:17.012 09:51:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:19.597 09:51:19 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:21.504 09:51:22 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:24.040 09:51:24 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:25.945 09:51:26 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:25.945 09:51:26 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:25.945 09:51:26 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:25.945 09:51:26 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:25.945 09:51:26 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:25.945 09:51:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.945 09:51:26 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.945 09:51:26 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.945 09:51:26 -- paths/export.sh@5 -- $ export PATH 00:37:25.945 09:51:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:25.945 09:51:26 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:37:25.945 09:51:26 -- common/autobuild_common.sh@447 -- $ date +%s 00:37:26.205 09:51:26 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721901086.XXXXXX 00:37:26.205 09:51:26 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721901086.FjyP8O 00:37:26.205 09:51:26 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:37:26.205 09:51:26 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:37:26.205 09:51:26 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:37:26.205 09:51:26 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:37:26.205 09:51:26 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:37:26.205 09:51:26 -- common/autobuild_common.sh@463 -- $ get_config_params 00:37:26.205 09:51:26 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:37:26.205 09:51:26 -- common/autotest_common.sh@10 -- $ set +x 00:37:26.205 09:51:26 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:37:26.205 09:51:26 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:37:26.205 09:51:26 -- pm/common@17 -- $ local monitor 00:37:26.205 09:51:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:26.205 09:51:26 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:26.205 09:51:26 -- pm/common@25 -- $ sleep 1 00:37:26.205 09:51:26 -- pm/common@21 -- $ date +%s 00:37:26.205 09:51:26 -- pm/common@21 -- $ date +%s 00:37:26.205 09:51:26 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721901086 00:37:26.205 09:51:26 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721901086 00:37:26.205 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721901086_collect-vmstat.pm.log 00:37:26.205 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721901086_collect-cpu-load.pm.log 00:37:27.145 09:51:27 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:37:27.145 09:51:27 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:37:27.145 09:51:27 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:37:27.145 09:51:27 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:27.145 09:51:27 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:27.145 09:51:27 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:27.145 09:51:27 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:27.145 09:51:27 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:27.145 09:51:27 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:27.145 09:51:27 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:27.145 09:51:27 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:27.145 09:51:27 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:27.145 09:51:27 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:27.145 09:51:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:27.145 09:51:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:37:27.145 09:51:27 -- pm/common@44 -- $ pid=87673 00:37:27.145 09:51:27 -- pm/common@50 -- $ kill -TERM 87673 00:37:27.145 09:51:27 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:27.145 09:51:27 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:37:27.145 09:51:27 -- pm/common@44 -- $ pid=87675 00:37:27.145 09:51:27 -- pm/common@50 -- $ kill -TERM 87675 00:37:27.145 + [[ -n 5359 ]] 00:37:27.145 + sudo kill 5359 00:37:27.156 [Pipeline] } 00:37:27.177 [Pipeline] // timeout 00:37:27.184 [Pipeline] } 00:37:27.202 [Pipeline] // stage 00:37:27.207 [Pipeline] } 00:37:27.224 [Pipeline] // catchError 00:37:27.235 [Pipeline] stage 00:37:27.238 [Pipeline] { (Stop VM) 00:37:27.252 [Pipeline] sh 00:37:27.538 + vagrant halt 00:37:30.087 ==> default: Halting domain... 00:37:38.224 [Pipeline] sh 00:37:38.506 + vagrant destroy -f 00:37:41.040 ==> default: Removing domain... 
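Just before the VM teardown above, the autopackage exit trap stops the two resource monitors it started by reading their pid files from the power output directory and sending SIGTERM, as seen in the kill -TERM 87673/87675 lines. A small bash sketch of that stop path (monitor names and pid-file layout taken from the trace; the final rm is an assumption, not shown in the log):

    power_dir=/home/vagrant/spdk_repo/spdk/../output/power

    signal_monitor_resources() {
        local signal=$1 monitor pid pid_file
        for monitor in collect-cpu-load collect-vmstat; do
            pid_file=$power_dir/$monitor.pid
            [[ -e $pid_file ]] || continue
            pid=$(<"$pid_file")
            kill "-$signal" "$pid" 2>/dev/null || true   # monitor may have exited already
            rm -f "$pid_file"                            # assumed cleanup
        done
    }

    trap 'signal_monitor_resources TERM' EXIT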
00:37:42.017 [Pipeline] sh 00:37:42.328 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:37:42.344 [Pipeline] } 00:37:42.396 [Pipeline] // stage 00:37:42.407 [Pipeline] } 00:37:42.515 [Pipeline] // dir 00:37:42.520 [Pipeline] } 00:37:42.538 [Pipeline] // wrap 00:37:42.544 [Pipeline] } 00:37:42.561 [Pipeline] // catchError 00:37:42.570 [Pipeline] stage 00:37:42.573 [Pipeline] { (Epilogue) 00:37:42.588 [Pipeline] sh 00:37:42.870 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:48.159 [Pipeline] catchError 00:37:48.161 [Pipeline] { 00:37:48.176 [Pipeline] sh 00:37:48.462 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:48.462 Artifacts sizes are good 00:37:48.471 [Pipeline] } 00:37:48.490 [Pipeline] // catchError 00:37:48.502 [Pipeline] archiveArtifacts 00:37:48.509 Archiving artifacts 00:37:48.644 [Pipeline] cleanWs 00:37:48.656 [WS-CLEANUP] Deleting project workspace... 00:37:48.656 [WS-CLEANUP] Deferred wipeout is used... 00:37:48.662 [WS-CLEANUP] done 00:37:48.664 [Pipeline] } 00:37:48.683 [Pipeline] // stage 00:37:48.689 [Pipeline] } 00:37:48.706 [Pipeline] // node 00:37:48.712 [Pipeline] End of Pipeline 00:37:48.748 Finished: SUCCESS